Dataset schema: aid (string, 9 to 15 chars), mid (string, 7 to 10 chars), abstract (string, 78 to 2.56k chars), related_work (string, 92 to 1.77k chars), ref_abstract (dict).
math0304100
2115080784
The Shub-Smale Tau Conjecture is a hypothesis relating the number of integral roots of a polynomial f in one variable and the Straight-Line Program (SLP) complexity of f. A consequence of the truth of this conjecture is that, for the Blum-Shub-Smale model over the complex numbers, P differs from NP. We prove two weak versions of the Tau Conjecture and in so doing show that the Tau Conjecture follows from an even more plausible hypothesis. Our results follow from a new p-adic analogue of earlier work relating real algebraic geometry to additive complexity. For instance, we can show that a nonzero univariate polynomial of additive complexity s can have no more than 15 + s^3(s+1)(7.5)^s s! = O(e^{s log s}) roots in the 2-adic rational numbers Q_2, thus dramatically improving an earlier result of the author. This immediately implies the same bound on the number of ordinary rational roots, whereas the best previous upper bound via earlier techniques from real algebraic geometry was a quantity in Omega((22.6)^{s^2}). This paper presents another step in the author's program of establishing an algorithmic arithmetic version of fewnomial theory.
As for earlier approaches to the @math -conjecture, some investigations into @math were initiated by de Melo and Svaiter @cite_8 (in the special case of constant polynomials) and Moreira @cite_14 . Unfortunately, not much more is known than (a) @math is "usually" bounded below by @math (where @math is @math and the maximum ranges over all coefficients @math of @math ) and (b) @math for @math sufficiently large [Thm. 3].
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "1582746416", "1542260560" ], "abstract": [ "Let Or(n) (the cost of n) be the minimum number of arithmetic operations needed to obtain n starting from 1. We prove that Or(n) > log n -log log n for almost all n E N, and, given ? > 0, Or(n) 1 there are i, j, 0 0, we have T((n) > (loglo gn)l+e for almost all n C N . Here we improve the above results, by proving the following: For almost all n c N we have Tr(n) > lolg nand, for any given E > 0, we have T((n) 0, T(n) > logn + (1 -)log nloglog logn for almost all n c N, log log n (log log n) 2 T((n) < log n + (3 + s log n log log log n for n large enough. log log n (log log n) 2 foiag nuh Moreover, the first inequality does not depend on the number of binary operations that we can use (provided that this number is finite). Received by the editors October 25, 1994 and, in revised form, August 15, 1995. 1991 Mathematics Subject Classification. Primary 11Y16; Secondary 68Q25, 68Q15, 11B75. ( 1997 American Mathematical Society", "A simplified rotor design utilizing two or less cavities per sample analysis station is described. Sample or reagent liquids are statically loaded directly into respective sample analysis cuvettes by means of respective apertures and centripet al ramps communicating with each cuvette. According to one embodiment, a single static loading cavity communicates with each sample analysis cuvette in a conventional manner to facilitate dynamic transfer of liquid from that cavity to the cuvette where mixing of sample and reagent liquids and their photometric analysis take place. Dynamic loading of sample or reagent liquids is provided in another embodiment." ] }
cs0304003
2949728882
Inevitability properties in branching temporal logics are of the syntax forall eventually φ, where φ is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.
In @cite_32 , the authors proposed an efficient symbolic model-checking algorithm for TCTL. However, the algorithm does not distinguish between Zeno and non-Zeno computations. Instead, the authors proposed to modify TAs with Zeno computations into ones without. In comparison, our greatest fixpoint evaluation algorithm is innately able to quantify over non-Zeno computations.
{ "cite_N": [ "@cite_32" ], "mid": [ "2167668895" ], "abstract": [ "Finite-state programs over real-numbered time in a guarded-command language with real-valued clocks are described. Model checking answers the question of which states of a real-time program satisfy a branching-time specification. An algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space, is given. >" ] }
cs0304003
2949728882
Inevitability properties in branching temporal logics are of the syntax forall eventually φ, where φ is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.
Several verification tools for TA have been devised and implemented so far @cite_8 @cite_24 @cite_2 @cite_34 @cite_14 @cite_16 @cite_28 @cite_21 @cite_6 @cite_35 @cite_9 . UPPAAL @cite_0 @cite_14 is one of the popular tools with DBM technology. It supports safety (reachability) analysis using forward reasoning techniques. Various state-space abstraction techniques and compact representation techniques have been developed @cite_31 @cite_4 . Recently, Moller has used UPPAAL with abstraction techniques to analyze restricted inevitability properties with no modal-formula nesting @cite_11 . The idea is to make model augmentations to speed up the verification performance. Moller also shows how to extend the idea to analyze TCTL with only universal quantifications. However, no experiment has been reported on the verification of nested modal-formulas.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_4", "@cite_8", "@cite_28", "@cite_9", "@cite_21", "@cite_16", "@cite_6", "@cite_24", "@cite_0", "@cite_2", "@cite_31", "@cite_34", "@cite_11" ], "mid": [ "", "", "2145000217", "1874725288", "1519587690", "2080267935", "", "", "", "", "", "", "2290972968", "", "2160106331" ], "abstract": [ "", "", "During the past few years, a number of verification tools have been developed for real-time systems in the framework of timed automata (e.g. KRONOS and UPPAAL). One of the major problems in applying these tools to industrial-size systems is the huge memory-usage for the exploration of the state-space of a network (or product) of timed automata, as the model-checkers must keep information on not only the control structure of the automata but also the clock values specified by clock constraints. In this paper, we present a compact data structure for representing clock constraints. The data structure is based on an O(n sup 3 ) algorithm which, given a constraint system over real-valued variables consisting of bounds on differences, constructs an equivalent system with a minimal number of constraints. In addition, we have developed an on-the-fly, reduction technique to minimize the space-usage. Based on static analysis of the control structure of a network of timed automata, we are able to compute a set of symbolic states that cover all the dynamic loops of the network in an on-the-fly searching algorithm, and thus ensure termination in reachability analysis. The two techniques and their combination have been implemented in the tool UPPAAL. Our experimental results demonstrate that the techniques result in truly significant space-reductions: for six examples from the literature, the space saving is between 75 and 94 , and in (nearly) all examples time-performance is improved. Also noteworthy is the observation that the two techniques are completely orthogonal.", "Both approaches presented in this paper considerably improve Kronos performance and functionalities.", "Real-world real-time systems may involve many complex structures, which are difficult to verify. We experiment with the model-checking of an application-layer html-based web-camera which involves structures like event queues, high-layer communication channels, and time-outs. To contain the complexity, we implement our verification tool with a newly developed BDD-like data-structure, reduced CRD (Clock-Restriction Diagram), which has enhanced the verification performance through intensive data-sharing in a previous report. However, the representation of reduced CRD does not allow for quick test of zone containment. To this purpose, we thus have designed a new CRD-based representation, cascade CRD, which has given us enough performance enhancement to successfully verifying several implementations of the web-camera.", "In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. 
Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.", "", "", "", "", "", "", "One of the major problems in applying automatic verification tools to industrial-size systems is the excessive amount of memory required during the state-space exploration of a model. In the setting of real-time, this problem of state-explosion requires extra attention as information must be kept not only on the discrete control structure but also on the values of continuous clock variables. In this paper, we present Clock Difference Diagrams, CDD's, a BDD-like data-structure for representing and effectively manipulating certain non-convex subsets of the Euclidean space, notably those encountered during verification of timed automata. A version of the real-time verification tool Uppaal using CDD's as a compact data-structure for storing explored symbolic states has been implemented. Our experimental results demonstrate significant space-savings: for 8 industrial examples, the savings are between 46% and 99% with moderate increase in runtime. We further report on how the symbolic state-space exploration itself may be carried out using CDD's.", "", "Abstract We present an approximation technique, that can render real-time model checking of safety and universal path properties more efficient. It is beneficial, when loops lead to repetition of control situations. Basically we augment a timed automata model with carefully selected extra transitions. This increases the size of the state-space, but potentially decreases the number of symbolic states to be explored by orders of magnitude. We give a formal definition of a timed automata formalism, enriched with basic data types, hand-shake synchronization, urgency, and committed locations. We prove by means of a trace semantics, that if a safety property can be established in the augmented model, it also holds for the original model. We extend our technique to a richer set of properties, that can be decided via a set of traces (universal path properties). In order for universal path properties to carry over to the original model, the semantics of the timed automata formalism is formulated relative to the applied augmentation. Our technique is particularly useful in systems, where a scheduler dictates repetition of control over elapsing time. As a typical example we mention translations of LEGO® RCX™ programs to Uppaal models, where the Round-Robin scheduler is a static entity. We allow scheduler and associated tasks to “park”, until some timing or environmental conditions are met. We apply our technique on a brick-sorter model for a safety property and report run-time data." ] }
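The O(n^3) tightening step mentioned in the first non-empty abstract above is, at heart, all-pairs shortest paths over the clock-difference constraint graph. A hedged sketch with plain numeric bounds (real DBM implementations additionally track strict versus non-strict inequalities):

    INF = float("inf")

    def canonicalize(D):
        # D[i][j] is an upper bound on x_i - x_j, with clock 0 the constant zero.
        # Floyd-Warshall closure tightens every bound to the best derivable one.
        n = len(D)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if D[i][k] + D[k][j] < D[i][j]:
                        D[i][j] = D[i][k] + D[k][j]
        return D  # a negative diagonal entry would signal an empty zone

    D = [[0, INF, INF],
         [5, 0, INF],
         [INF, 2, 0]]
    print(canonicalize(D))  # D[2][0] tightens to 7 via x2 - x1 <= 2, x1 - x0 <= 5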
cs0304003
2949728882
Inevitability properties in branching temporal logics are of the syntax forall eventually φ, where φ is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.
Kronos @cite_8 @cite_9 is a full TCTL model-checker with DBM technology and both forward and backward reasoning capability. Experiments demonstrating how to use Kronos to verify several TCTL bounded inevitability properties are reported in @cite_9 . (Bounded inevitabilities are those inevitabilities specified with a deadline.) But no report has been made on how to enhance the performance of general inevitability analysis. In comparison, we have discussed techniques like EDGF and abstractions which handle both bounded and unbounded inevitabilities.
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2080267935", "1874725288" ], "abstract": [ "In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.", "Both approaches presented in this paper considerably improve Kronos performance and functionalities." ] }
1708.02883
2742168128
Consider a structured matrix factorization model where one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest in research topics over different areas, such as hyperspectral unmixing in remote sensing, topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then by solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
The first framework to be reviewed is pure-pixel search in HU in remote sensing @cite_27 or separable NMF in machine learning @cite_45 . Both assume that for every @math , there exists an index @math such that @math . The above assumption is called the pure-pixel assumption in HU or the separability assumption in separable NMF. Figure (a) illustrates the geometry of @math under the pure-pixel assumption, where we see that the pure pixels @math are the vertices of the convex hull @math . This suggests that some kind of vertex search can lead to recovery of @math ---the key insight of almost all algorithms in this framework. The beauty of pure-pixel search or separable NMF is that under the pure-pixel assumption, SSMF can be accomplished either via simple algorithms @cite_10 @cite_35 (one such algorithm is sketched below) or via convex optimization @cite_32 @cite_19 @cite_36 @cite_41 @cite_9 . Also, as shown in the aforementioned references, some of these algorithms are supported by theoretical analyses in terms of guarantees on recovery accuracies.
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_41", "@cite_9", "@cite_32", "@cite_19", "@cite_27", "@cite_45", "@cite_10" ], "mid": [ "2040399008", "2125126993", "2046793177", "1966872876", "2951734015", "1969013340", "2185164352", "1854811422", "2105617746" ], "abstract": [ "This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially, the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise-including its identification of the (unknown) number of endmembers-under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.", "We are interested in a low-rank matrix factorization problem where one of the matrix factors has a special structure; specifically, its columns live in the unit simplex. This problem finds applications in diverse areas such as hyperspectral unmixing, video summarization, spectrum sensing, and blind speech separation. Prior works showed that such a factorization problem can be formulated as a self-dictionary sparse optimization problem under some assumptions that are considered realistic in many applications, and convex mixed norms were employed as optimization surrogates to realize the factorization in practice. Numerical results have shown that the mixed-norm approach demonstrates promising performance. In this letter, we conduct performance analysis of the mixed-norm approach under noise perturbations. Our result shows that using a convex mixed norm can indeed yield provably good solutions. More importantly, we also show that using nonconvex mixed (quasi) norms is more advantageous in terms of robustness against noise.", "A collaborative convex framework for factoring a data matrix X into a nonnegative product AS , with a sparse coefficient matrix S, is proposed. We restrict the columns of the dictionary matrix A to coincide with certain columns of the data matrix X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. We use l1, ∞ regularization to select the dictionary from the data and show that this leads to an exact convex relaxation of l0 in the case of distinct noise-free data. We also show how to relax the restriction-to-X constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in X. 
We focus on applications of the proposed framework to hyperspectral endmember and abundance identification and also show an application to blind source separation of nuclear magnetic resonance data.", "We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives.", "This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven model for the factorization where the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C such that X approximately equals CX and some linear constraints. The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis demonstrates that this approach has guarantees similar to those of the recent NMF algorithm of (2012). In contrast with this earlier work, the proposed method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation can factor a multigigabyte matrix in a matter of minutes.", "Although nonnegative matrix factorization (NMF) is NP-hard in general, it has been shown very recently that it is tractable under the assumption that the input nonnegative data matrix is close to being separable. (Separability requires that all columns of the input matrix belong to the cone spanned by a small subset of these columns.) Since then, several algorithms have been designed to handle this subclass of NMF problems. In particular, [Adv. Neural Inform. Process. Syst., 25 (2012), pp. 1223--1231] proposed a linear programming model, referred to as Hottopixx. In this paper, we provide a new and more general robustness analysis of their method. In particular, we design a provably more robust variant using a postprocessing strategy which allows us to deal with duplicates and near duplicates in the data set.", "", "Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. 
We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging --this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise --this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.", "Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster." ] }
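One of the "simple algorithms" that the pure-pixel/separability assumption permits is successive projection: repeatedly pick the data column of largest norm and project it out; under that assumption the picked columns are the vertices. A minimal numpy sketch (our own rendition for illustration, not the exact algorithm of any single cited paper):

    import numpy as np

    def spa(X, r):
        # X: d-by-n data matrix; returns indices of r (approximate) pure pixels.
        R = X.astype(float).copy()
        picked = []
        for _ in range(r):
            j = int(np.argmax(np.linalg.norm(R, axis=0)))
            picked.append(j)
            u = R[:, [j]] / np.linalg.norm(R[:, j])
            R -= u @ (u.T @ R)  # project out the chosen vertex direction
        return picked

    # Toy separable data: the first three columns are the vertices themselves.
    A = np.eye(3)
    S = np.array([[1., 0, 0, .5, .2], [0, 1, 0, .5, .3], [0, 0, 1, 0, .5]])
    print(spa(A @ S, 3))  # expected to recover [0, 1, 2]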
1708.02883
2742168128
Consider a structured matrix factorization model where one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest in research topics over different areas, such as hyperspectral unmixing in remote sensing, topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then by solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
To give insights into how the geometry of the pure-pixel case can be utilized for SSMF, we briefly describe a pure-pixel search framework based on maximum volume inscribed simplex (MVIS) @cite_4 @cite_47 . The MVIS framework considers the problem of finding a simplex @math that is inscribed in the data convex hull @math and whose volume is the maximum; see Figure for an illustration. Intuitively, it seems true that the vertices of the MVIS, under the pure-pixel assumption, should be @math . In fact, this can be shown to be valid. It should be noted that the above theorem also reveals that the MVIS cannot correctly recover @math for no-pure-pixel or non-separable problem instances. Readers are also referred to @cite_47 for details on how the MVIS problem is handled in practice.
{ "cite_N": [ "@cite_47", "@cite_4" ], "mid": [ "2050041778", "2157321686" ], "abstract": [ "Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under a commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. Based on intuitive grounds, Craig reported an unmixing criterion without requiring the pure-pixel assumption, which estimates the endmembers by vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented by readily available LP solvers. We also provide a non-heuristic guarantee of our MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers. Some Monte Carlo simulations and real data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods.", "Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method." ] }
1708.02883
2742168128
Consider a structured matrix factorization model where one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest in research topics over different areas, such as hyperspectral unmixing in remote sensing, topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then by solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
While MVES is appealing in its recovery guarantees, the pursuit of SSMF frameworks is arguably not over. The MVES problem is non-convex and NP-hard in general @cite_21 . Our numerical experience is that the convergence of an MVES algorithm to a good result could depend on the starting point. Hence, it is interesting to study alternative frameworks that can also go beyond the pure-pixel or separability case and can bring a new perspective to the no-pure-pixel case---and this is the motivation for our development of the MVIE framework in the next section.
{ "cite_N": [ "@cite_21" ], "mid": [ "2026302893" ], "abstract": [ "The problem of finding a d -simplex of maximum volume in an arbitrary d -dimensional V -polytope, for arbitrary d , was shown by [GKL] in 1995 to be NP-hard. They conjectured that the corresponding problem for H -polytopes is also NP-hard. This paper presents a unified way of proving the NP-hardness of both these problems. The approach also yields NP-hardness proofs for the problems of finding d -simplices of minimum volume containing d -dimensional V - or H -polytopes. The polytopes that play the key role in the hardness proofs are truncations of simplices. A construction is presented which associates a truncated simplex to a given directed graph, and the hardness results follow from the hardness of detecting whether a directed graph has a partition into directed triangles." ] }
1708.02884
2951525085
The size of a software artifact influences the software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might not be able to cope anymore, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting the time when the size grows to the level where maintenance is needed prevents unexpected efforts and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit with prerequisites and expectations from practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore the applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from the literature, including both statistical and machine learning approaches. Furthermore, we elicit expectations towards predictions from practitioners using a survey and stakeholder workshops. At the same time, we measure the software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches using the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting while considering stakeholder expectations. We show that the approaches provide significantly different results regarding prediction accuracy and that the statistical approaches fit our data best.
Size measurement is also well studied. Research investigating software size has shown that size can be used to assess productivity @cite_27 and defects @cite_25 in practice. The authors of @cite_18 have shown that simple metrics like lines of code are well suited to assess software complexity and maintainability in a similar environment. Hence, in this study, we consider size a powerful and established metric.
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_25" ], "mid": [ "2134304939", "2077418553", "2120999344" ], "abstract": [ "Some current object-oriented analysis methods provide use cases. Scenarios or similar concepts to describe functional requirements for software systems. The authors introduced the use case point method to measure the size of large-scale software systems based on such requirements specifications. With the measured size and the measured effort the real productivity can be calculated in terms of delivered functionality. In the status report they summarize the experiences made with size metrics and productivity rates at a major Swiss banking institute. They analyzed the quality of requirements documents and the measured use case points in order to test and calibrate the use case point method. Experiences are based on empirical data of a productivity benchmark of 23 measured projects (quantitative analysis), 64 evaluated questionnaires of project members and 11 post benchmark interviews held with selected project managers (qualitative analysis).", "Context: Simulink models are used during software integration testing in the automotive domain on hardware in the loop (HIL) rigs. As the amount of software in cars is increasing continuously, the number of Simulink models for control logic and plant models is growing at the same time. Objective: The study aims for investigating the applicability of three approaches for evaluating model complexity in an industrial setting. Additionally, insights on the understanding of maintainability in industry are gathered. Method: Simulink models from two vehicle projects at a German premium car manufacturer are evaluated by applying the following three approaches: Assessing a model's (a) size, (b) structure, and (c) signal routing. Afterwards, an interview study is conducted followed by an on-site workshop in order to validate the findings. Results: The measurements of 65 models resulted in comparable data for the three measurement approaches. Together with the interview studies, conclusions were drawn on how well each approach reflects the experts' opinions. Additionally, it was possible to get insights on maintainability in an industrial setting. Conclusion: By analyzing the results, differences between the three measurement approaches were revealed. The interviews showed that the expert opinion tends to favor the results of the simple size measurements over the measurement including the signal routing.", "Defective software modules cause software failures, increase development and maintenance costs, and decrease customer satisfaction. Effective defect prediction models can help developers focus quality assurance activities on defect-prone modules and thus improve software quality by using resources more efficiently. These models often use static measures obtained from source code, mainly size, coupling, cohesion, inheritance, and complexity measures, which have been associated with risk factors, such as defects and changes." ] }
1708.02884
2951525085
The size of a software artifact influences the software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might not be able to cope anymore, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting the time when the size grows to the level where maintenance is needed prevents unexpected efforts and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit with prerequisites and expectations from practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore the applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from the literature, including both statistical and machine learning approaches. Furthermore, we elicit expectations towards predictions from practitioners using a survey and stakeholder workshops. At the same time, we measure the software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches using the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting while considering stakeholder expectations. We show that the approaches provide significantly different results regarding prediction accuracy and that the statistical approaches fit our data best.
Regarding the field of mining software repositories, many studies focus on predicting defects, such as Zimmermann @cite_19 . Other studies have used software repositories to investigate refactoring practices (cf. @cite_5 ). However, to the best knowledge of the authors, no studies have been reported so far that combine software repository mining and measurement with prediction approaches, including machine learning approaches, evaluated in a realistic industrial context.
{ "cite_N": [ "@cite_19", "@cite_5" ], "mid": [ "2125346455", "2113157806" ], "abstract": [ "Software development results in a huge amount of data: changes to source code are recorded in version archives, bugs are reported to issue tracking systems, and communications are archived in e-mails and newsgroups. We present techniques for mining version archives and bug databases to understand and support software development. First, we introduce the concept of co-addition of method calls, which we use to identify patterns that describe how methods should be called. We use dynamic analysis to validate these patterns and identify violations. The co-addition of method calls can also detect cross-cutting changes, which are an indicator for concerns that could have been realized as aspects in aspect-oriented programming. Second, we present techniques to build models that can successfully predict the most defect-prone parts of large-scale industrial software, in our experiments Windows Server 2003. This helps managers to allocate resources for quality assurance to those parts of a system that are expected to have most defects. The proposed measures on dependency graphs outperformed traditional complexity metrics. In addition, we found empirical evidence for a domino effect, i.e., depending on defect-prone binaries increases the chances of having defects.", "Refactoring is widely practiced by developers, and considerable research and development effort has been invested in refactoring tools. However, little has been reported about the adoption of refactoring tools, and many assumptions about refactoring practice have little empirical support. In this paper, we examine refactoring tool usage and evaluate some of the assumptions made by other researchers. To measure tool usage, we randomly sampled code changes from four Eclipse and eight Mylyn developers and ascertained, for each refactoring, if it was performed manually or with tool support. We found that refactoring tools are seldom used: 11 percent by Eclipse developers and 9 percent by Mylyn developers. To understand refactoring practice at large, we drew from a variety of data sets spanning more than 39,000 developers, 240,000 tool-assisted refactorings, 2,500 developer hours, and 12,000 version control commits. Using these data, we cast doubt on several previously stated assumptions about how programmers refactor, while validating others. Finally, we interviewed the Eclipse and Mylyn developers to help us understand why they did not use refactoring tools and to gather ideas for future research." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
In 2012, @cite_11 released a dataset containing 20,000 sketches distributed over 250 object categories. These sketches are drawn from daily objects, but humans can only correctly identify the sketch category with an accuracy of 73%. Hand-crafted features share a similar spirit with image classification methods, including feature extraction and classification. Most of the existing works regard a sketch as a texture image and use features such as HOG and SIFT @cite_22 @cite_23 . In @cite_11 , SIFT features are extracted from image patches. The method also takes into account that sketches do not have smooth gradients and are much sparser than images. Similarly, @cite_1 leverage Fisher Vectors and spatial pyramid pooling to represent SIFT features. @cite_16 employ multiple-kernel learning (MKL) to learn appropriate weights of different features. The method improves the performance considerably, as different features are complementary to each other.
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_23", "@cite_16", "@cite_11" ], "mid": [ "2161969291", "", "2151103935", "2048568806", "1972420097" ], "abstract": [ "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "Free-hand sketch recognition has become increasingly popular due to the recent expansion of portable touchscreen devices. However, the problem is non-trivial due to the complexity of internal structures that leads to intra-class variations, coupled with the sparsity in visual cues that results in inter-class ambiguities. In order to address the structural complexity, a novel structured representation for sketches is proposed to capture the holistic structure of a sketch. Moreover, to overcome the visual cue sparsity problem and therefore achieve state-of-the-art recognition performance, we propose a Multiple Kernel Learning (MKL) framework for sketch recognition, fusing several features common to sketches. We evaluate the performance of all the proposed techniques on the most diverse sketch dataset to date (, 2012), and offer detailed and systematic analyses of the performance of different features and representations, including a breakdown by sketch-super-category. Finally, we investigate the use of attributes as a high-level feature for sketches and show how this complements low-level features for improving recognition performance under the MKL framework, and consequently explore novel applications such as attribute-based retrieval.", "Humans have used sketching to depict our visual world since prehistoric times. 
Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73% of the time. We compare human performance against computational recognition methods. We develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method is able to identify unknown sketches with 56% accuracy (chance is 0.4%). Based on the computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community." ] }
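For reference, computing the kind of dense gradient descriptor these texture-based pipelines build on is a single call in scikit-image (parameter values below are common defaults, not those of the cited works):

    import numpy as np
    from skimage.feature import hog

    sketch = np.zeros((128, 128))   # stand-in for a rasterized sketch
    sketch[40:90, 60:64] = 1.0      # one synthetic stroke

    features = hog(sketch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")
    print(features.shape)           # a fixed-length descriptor, here (8100,)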
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
Recently, deep neural networks (DNNs) have achieved great success @cite_2 by replacing hand-crafted representations with learned ones. @cite_30 @cite_21 leverage a well-designed convolutional neural network (CNN) architecture for sketch recognition. According to their experimental results, their method surpasses the best result achieved by humans. In @cite_12 @cite_1 , the sequential information of sketches is exploited. @cite_25 explicitly uses sequential regularities in strokes with the help of a recurrent neural network (RNN).
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_1", "@cite_2", "@cite_25", "@cite_12" ], "mid": [ "2950166981", "2493181180", "", "", "2949877834", "2013585293" ], "abstract": [ "We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photo or sketch. Our network on the other hand not only delivers the best performance on the largest human sketch dataset to date, but also is small in size making efficient training possible using just CPUs.", "We propose a deep learning approach to free-hand sketch recognition that achieves state-of-the-art performance, significantly surpassing that of humans. Our superior performance is a result of modelling and exploiting the unique characteristics of free-hand sketches, i.e., consisting of an ordered set of strokes but lacking visual cues such as colour and texture, being highly iconic and abstract, and exhibiting extremely large appearance variations due to different levels of abstraction and deformation. Specifically, our deep neural network, termed Sketch-a-Net has the following novel components: (i) we propose a network architecture designed for sketch rather than natural photo statistics. (ii) Two novel data augmentation strategies are developed which exploit the unique sketch-domain properties to modify and synthesise sketch training data at multiple abstraction levels. Based on this idea we are able to both significantly increase the volume and diversity of sketches for training, and address the challenge of varying levels of sketching detail commonplace in free-hand sketches. (iii) We explore different network ensemble fusion strategies, including a re-purposed joint Bayesian scheme, to further improve recognition performance. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photos or sketches. Furthermore, through visualising the learned filters, we offer useful insights in to where the superior performance of our network comes from.", "", "", "Freehand sketching is an inherently sequential process. Yet, most approaches for hand-drawn sketch recognition either ignore this sequential aspect or exploit it in an ad-hoc manner. In our work, we propose a recurrent neural network architecture for sketch object recognition which exploits the long-term sequential and structural regularities in stroke data in a scalable manner. Specifically, we introduce a Gated Recurrent Unit based framework which leverages deep sketch features and weighted per-timestep loss to achieve state-of-the-art results on a large database of freehand object sketches across a large number of object categories. 
The inherently online nature of our framework is especially suited for on-the-fly recognition of objects as they are being drawn. Thus, our framework can enable interesting applications such as camera-equipped robots playing the popular party game Pictionary with human players and generating sparsified yet recognizable sketches of objects.", "Computational support for sketching is an exciting research area at the intersection of design research, human–computer interaction, and artificial intelligence. Despite the prevalence of software tools, most designers begin their work with physical sketches. Modern computational tools largely treat design as a linear process beginning with a specific problem and ending with a specific solution. Sketch-based design tools offer another approach that may fit design practice better. This review surveys literature related to such tools. First, we describe the practical basis of sketching — why people sketch, what significance it has in design and problem solving, and the cognitive activities it supports. Second, we survey computational support for sketching, including methods for performing sketch recognition and managing ambiguity, techniques for modeling recognizable elements, and human–computer interaction techniques for working with sketches. Last, we propose challenges and opportunities for future advances in this field." ] }
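A minimal PyTorch sketch of the dual-network idea in the abstract above: two GRUs over per-step shape and texture feature sequences, fused by a third GRU. All dimensions and the fusion details are our own illustrative choices, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class DualGRUSketchNet(nn.Module):
        def __init__(self, shape_dim=32, tex_dim=512, hidden=256, classes=250):
            super().__init__()
            self.shape_gru = nn.GRU(shape_dim, hidden, batch_first=True)
            self.tex_gru = nn.GRU(tex_dim, hidden, batch_first=True)
            self.fuse_gru = nn.GRU(2 * hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, classes)

        def forward(self, shape_seq, tex_seq):
            # shape_seq: (B, T, shape_dim) coded stroke-shape features
            # tex_seq:   (B, T, tex_dim) CNN texture features, one per step
            hs, _ = self.shape_gru(shape_seq)
            ht, _ = self.tex_gru(tex_seq)
            fused, _ = self.fuse_gru(torch.cat([hs, ht], dim=-1))
            return self.head(fused[:, -1])  # classify from the final step

    net = DualGRUSketchNet()
    print(net(torch.randn(4, 10, 32), torch.randn(4, 10, 512)).shape)  # (4, 250)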
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
CNNs and RNNs are two major branches of neural networks. CNNs have achieved great success in many practical applications @cite_2 @cite_27 . LeNet @cite_7 is a classical CNN architecture, which has been used in handwritten digit recognition. Wang et al. @cite_8 use a Siamese network to retrieve 3D models from sketches. However, all the above methods treat sketches as traditional images and ignore the sparsity of sketches.
{ "cite_N": [ "@cite_8", "@cite_27", "@cite_7", "@cite_2" ], "mid": [ "2952320381", "1686810756", "2154579312", "" ], "abstract": [ "Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of \"best views\" are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection -- matching) is pragmatic but also problematic because the \"best views\" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of \"best views\" and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has 1 error rate and about a 9 reject rate on zipcode digits provided by the U.S. Postal Service.", "" ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
In order to exploit an appropriate learning architecture for sketch recognition, @cite_30 @cite_21 develop a network named Sketch-A-Net (SAN), which enlarges the pooling sizes and filter patches to cater to the sparse character of sketches. Different from traditional images, sketches have an inherent sequential property. @cite_30 and @cite_21 divide the strokes into several groups according to stroke order. However, CNNs cannot build connections between sequential strokes.
{ "cite_N": [ "@cite_30", "@cite_21" ], "mid": [ "2950166981", "2493181180" ], "abstract": [ "We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photo or sketch. Our network on the other hand not only delivers the best performance on the largest human sketch dataset to date, but also is small in size making efficient training possible using just CPUs.", "We propose a deep learning approach to free-hand sketch recognition that achieves state-of-the-art performance, significantly surpassing that of humans. Our superior performance is a result of modelling and exploiting the unique characteristics of free-hand sketches, i.e., consisting of an ordered set of strokes but lacking visual cues such as colour and texture, being highly iconic and abstract, and exhibiting extremely large appearance variations due to different levels of abstraction and deformation. Specifically, our deep neural network, termed Sketch-a-Net has the following novel components: (i) we propose a network architecture designed for sketch rather than natural photo statistics. (ii) Two novel data augmentation strategies are developed which exploit the unique sketch-domain properties to modify and synthesise sketch training data at multiple abstraction levels. Based on this idea we are able to both significantly increase the volume and diversity of sketches for training, and address the challenge of varying levels of sketching detail commonplace in free-hand sketches. (iii) We explore different network ensemble fusion strategies, including a re-purposed joint Bayesian scheme, to further improve recognition performance. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photos or sketches. Furthermore, through visualising the learned filters, we offer useful insights in to where the superior performance of our network comes from." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
The sequential property is not exclusive to sketches. Many works resort to RNNs to improve performance in speech recognition @cite_18 and text generation @cite_28 . RNNs are specialized for processing input sequences: hidden units are connected across time steps, so information from earlier parts of a sequence is delivered to later parts. However, they have a significant limitation known as the 'vanishing gradient' problem: when the input sequence is quite long, it is difficult to propagate gradients through the many layers of the unrolled network, which easily causes gradients to vanish or explode @cite_3 . To overcome this limitation of RNNs, long short-term memory (LSTM) @cite_3 and the gated recurrent unit (GRU) @cite_4 were proposed. The GRU can be regarded as a lightweight version of the LSTM, and it outperforms the LSTM in certain cases while learning a smaller number of parameters @cite_10 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_3", "@cite_10" ], "mid": [ "2115730999", "2950635152", "196214544", "", "1924770834" ], "abstract": [ "In this paper, we show how new training principles and optimization techniques for neural networks can be used for different network structures. In particular, we revisit the Recurrent Neural Network (RNN), which explicitly models the Markovian dynamics of a set of observations through a non-linear function with a much larger hidden state space than traditional sequence models such as an HMM. We apply pretraining principles used for Deep Neural Networks (DNNs) and second-order optimization techniques to train an RNN. Moreover, we explore its application in the Aurora2 speech recognition task under mismatched noise conditions using a Tandem approach. We observe top performance on clean speech, and under high noise conditions, compared to multi-layer perceptrons (MLPs) and DNNs, with the added benefit of being a “deeper” model than an MLP but more compact than a DNN.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.", "", "In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. 
Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM." ] }
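For reference, the GRU discussed above can be written in a few lines. The NumPy sketch below follows the standard update-gate/reset-gate equations (biases omitted for brevity; gate-combination conventions vary slightly between papers and libraries):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)           # how much of the old state to keep
    r = sigmoid(x @ Wr + h @ Ur)           # how much of the old state to read
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return z * h + (1.0 - z) * h_tilde     # interpolate old and candidate state

rng = np.random.default_rng(0)
d_in, d_h = 8, 16
params = [rng.normal(scale=0.1, size=shape)
          for shape in [(d_in, d_h), (d_h, d_h)] * 3]   # Wz,Uz,Wr,Ur,Wh,Uh
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):       # run over a length-5 sequence
    h = gru_cell(x, h, params)
```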
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
Sarvadevabhatla et al. @cite_25 take the order of strokes as a sequence and feed stroke features to a GRU. In this way, long-term sequential and structural regularities of strokes can be exploited. As a result, they achieve the best recognition performance. However, stroke order varies severely within the same category, which causes the network output to fluctuate.
{ "cite_N": [ "@cite_25" ], "mid": [ "2949877834" ], "abstract": [ "Freehand sketching is an inherently sequential process. Yet, most approaches for hand-drawn sketch recognition either ignore this sequential aspect or exploit it in an ad-hoc manner. In our work, we propose a recurrent neural network architecture for sketch object recognition which exploits the long-term sequential and structural regularities in stroke data in a scalable manner. Specifically, we introduce a Gated Recurrent Unit based framework which leverages deep sketch features and weighted per-timestep loss to achieve state-of-the-art results on a large database of freehand object sketches across a large number of object categories. The inherently online nature of our framework is especially suited for on-the-fly recognition of objects as they are being drawn. Thus, our framework can enable interesting applications such as camera-equipped robots playing the popular party game Pictionary with human players and generating sparsified yet recognizable sketches of objects." ] }
1708.02760
2743858606
The ability to ask questions is a powerful tool to gather information in order to learn about the world and resolve ambiguities. In this paper, we explore a novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement and new extension to the rich research studies on image captioning and question answering. We introduce the first large-scale dataset with over 10,000 carefully annotated images-question tuples to facilitate benchmarking. In particular, each tuple consists of a pair of images and 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples) on average. In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner without discriminative images-question tuples but just existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.
Image captioning. The goal of image captioning is to automatically generate natural language descriptions of images @cite_33 . The CNN-LSTM framework has been commonly adopted and shows good performance @cite_25 @cite_17 @cite_0 @cite_44 @cite_18 . Xu et al. @cite_55 introduce an attention mechanism to exploit spatial information from image context. Krishna et al. @cite_19 incorporate object detection @cite_6 to generate descriptions for dense regions. Jia et al. @cite_35 extract semantic information from images as an extra guide for caption generation. Krause et al. @cite_60 use a hierarchical RNN to generate entire paragraphs that describe images, which is more descriptive than a single-sentence caption. In contrast to these studies, we are interested in generating a question rather than a caption to distinguish two objects in images.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_60", "@cite_55", "@cite_6", "@cite_0", "@cite_44", "@cite_19", "@cite_25", "@cite_17" ], "mid": [ "2220981600", "2952322859", "2949769367", "2549599535", "2950178297", "2953106684", "1811254738", "2951912364", "2277195237", "2951183276", "2951805548" ], "abstract": [ "In this work we focus on the problem of image caption generation. We propose an extension of the long short term memory (LSTM) model, which we coin gLSTM for short. In particular, we add semantic information extracted from the image as extra input to each unit of the LSTM block, with the aim of guiding the model towards solutions that are more tightly coupled to the image content. Additionally, we explore different length normalization strategies for beam search to avoid bias towards short sentences. On various benchmark datasets such as Flickr8K, Flickr30K and MS COCO, we obtain results that are on par with or better than the current state-of-the-art.", "Much recent progress in Vision-to-Language problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we first propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We further show that the same mechanism can be used to incorporate external knowledge, which is critically important for answering high level visual questions. Specifically, we design a visual question answering model that combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain a complete answer. Our final model achieves the best reported results on both image captioning and visual question answering on several benchmark datasets.", "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1 . When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34 of the time.", "Recent progress on image captioning has made it possible to generate novel sentences describing images in natural language, but compressing an image into a single sentence can describe visual content in only coarse detail. 
While one new captioning approach, dense captioning, can potentially describe images in finer levels of detail by captioning many regions within an image, it in turn is unable to produce a coherent story for an image. In this paper we overcome these limitations by generating entire paragraphs for describing images, which can tell detailed, unified stories. We develop a model that decomposes both images and paragraphs into their constituent parts, detecting semantic regions in images and using a hierarchical recurrent neural network to reason about language. Linguistic analysis confirms the complexity of the paragraph generation task, and thorough experiments on a new dataset of image and paragraph pairs demonstrate the effectiveness of our approach.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. 
In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. 
In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations." ] }
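The CNN-LSTM captioning recipe surveyed above reduces to: encode the image into a feature vector, initialize a recurrent decoder with it, and emit words one step at a time. A deliberately tiny, illustrative PyTorch version follows (greedy decoding, no attention; the vocabulary size and dimensions are placeholders, not values from any cited paper):

```python
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    """Minimal CNN-LSTM captioner: an image feature initializes the LSTM,
    then words are generated one step at a time (greedy decoding)."""
    def __init__(self, vocab=1000, emb=64, hid=128, img_feat=512):
        super().__init__()
        self.img_proj = nn.Linear(img_feat, hid)
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTMCell(emb, hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, img_feat, max_len=10, bos=1):
        h = torch.tanh(self.img_proj(img_feat))     # init state from image
        c = torch.zeros_like(h)
        word = torch.full((img_feat.size(0),), bos, dtype=torch.long)
        tokens = []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(word), (h, c))
            word = self.out(h).argmax(dim=1)        # greedy next word
            tokens.append(word)
        return torch.stack(tokens, dim=1)           # (batch, max_len)

cap = TinyCaptioner()
print(cap(torch.randn(2, 512)).shape)               # -> torch.Size([2, 10])
```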
1708.02760
2743858606
The ability to ask questions is a powerful tool to gather information in order to learn about the world and resolve ambiguities. In this paper, we explore a novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement and new extension to the rich research studies on image captioning and question answering. We introduce the first large-scale dataset with over 10,000 carefully annotated images-question tuples to facilitate benchmarking. In particular, each tuple consists of a pair of images and 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples) on average. In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner without discriminative images-question tuples but just existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.
Referring expression generation. A closely related task to VDQG is REG, where the model is required to generate unambiguous object descriptions. Referring expressions have been studied in Natural Language Processing (NLP) @cite_43 @cite_22 @cite_52 . Kazemzadeh et al. @cite_27 introduce the first large-scale dataset for REG in real-world scenes. They use images from the ImageCLEF dataset @cite_49 , and collect referring expression annotations by developing a ReferIt game. The authors of @cite_16 @cite_14 build two larger REG datasets using similar approaches on top of MS COCO @cite_28 . The CNN-LSTM model has been shown to be effective in both the generation @cite_16 @cite_14 and comprehension @cite_29 @cite_59 of referring expressions. Mao et al. @cite_16 introduce a discriminative loss function based on Maximum Mutual Information. Yu et al. @cite_14 study the use of context in the REG task. Yu et al. @cite_41 propose a speaker-listener-reinforcer framework for REG, which is end-to-end trainable by reinforcement learning.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_41", "@cite_29", "@cite_52", "@cite_43", "@cite_27", "@cite_49", "@cite_59", "@cite_16" ], "mid": [ "2949107813", "", "", "2571175805", "2963735856", "", "1533917153", "2251512949", "2006147162", "", "2144960104" ], "abstract": [ "Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg, shows the advantages of our methods for both referring expression generation and comprehension.", "", "", "Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a unified framework for the tasks of referring expression comprehension and generation. Our model is composed of three modules: speaker, listener, and reinforcer. The speaker generates referring expressions, the listener comprehends referring expressions, and the reinforcer introduces a reward function to guide sampling of more discriminative expressions. The listener-speaker modules are trained jointly in an end-to-end learning framework, allowing the modules to be aware of one another during learning while also benefiting from the discriminative reinforcer&#x2019;s feedback. We demonstrate that this unified framework and training achieves state-of-the-art results for both comprehension and generation on three referring expression datasets.", "In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.", "", "Language is sensitive to both semantic and pragmatic effects. To capture both effects, we model language use as a cooperative game between two players: a speaker, who generates an utterance, and a listener, who responds with an action. Specifically, we consider the task of generating spatial references to objects, wherein the listener must accurately identify an object described by the speaker. 
We show that a speaker model that acts optimally with respect to an explicit, embedded listener model substantially outperforms one that is trained to directly generate spatial descriptions.", "In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.", "Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet, the lack of a standardized evaluation platform tailored to the needs of AIA, has hindered effective evaluation of its methods, especially for region-based AIA. Therefore in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark; an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.", "", "We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox." ] }
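To illustrate the discriminative training idea attributed to Mao et al. @cite_16 above: the expression should score higher for its target region than for the other regions in the image. The snippet below is a simplified stand-in that scores expression/region embedding pairs with dot products and applies a softmax over regions; the original formulation is over sentence likelihoods, so treat this only as a sketch of the Maximum-Mutual-Information flavour.

```python
import torch
import torch.nn.functional as F

def mmi_style_loss(expr_emb, region_embs, target_idx):
    """The expression embedding should score highest with its target region;
    a softmax over all candidate regions makes the objective discriminative."""
    scores = region_embs @ expr_emb               # (n_regions,)
    return F.cross_entropy(scores.unsqueeze(0),
                           torch.tensor([target_idx]))

expr = torch.randn(32)                # embedding of the referring expression
regions = torch.randn(5, 32)          # embeddings of 5 candidate regions
loss = mmi_style_loss(expr, regions, target_idx=2)
print(float(loss))
```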
1708.02861
2745029067
Due to the difficulty of coordination in multi-cell random access, it is a practical challenge how to achieve the optimal throughput with decentralized transmission. In this paper, we propose a decentralized multi-cell-aware opportunistic random access (MA-ORA) protocol that achieves the optimal throughput scaling in an ultra-dense @math -cell random access network with one access point (AP) and @math users in each cell, which is suited for machine-type communications. Unlike opportunistic scheduling for cellular multiple access where users are selected by base stations, under our MA-ORA protocol, each user opportunistically transmits with a predefined physical layer data rate in a decentralized manner if the desired signal power to the serving AP is sufficiently large and the generating interference leakage power to the other APs is sufficiently small (i.e., two threshold conditions are fulfilled). As a main result, it is proved that the aggregate throughput scales as @math in a high signal-to-noise ratio (SNR) regime if @math scales faster than @math for small constants @math . Our analytical result is validated by computer simulations. In addition, numerical evaluation confirms that under a practical setting, the proposed MA-ORA protocol outperforms conventional opportunistic random access protocols in terms of throughput.
On the one hand, there are extensive studies on interference management for cellular networks with multiple base stations @cite_31 @cite_44 . While it has been elusive to find the optimal strategy with respect to the Shannon-theoretic capacity in multiuser cellular networks, interference alignment (IA) was recently proposed for fundamentally solving the interference problem when there are multiple communication pairs @cite_41 . It was shown that IA can asymptotically achieve the optimal degrees of freedom, which are equal to @math , in the @math -user interference channel with time-varying coefficients. Subsequent work showed that interference management schemes based on IA are well applicable to various communication scenarios @cite_35 @cite_18 @cite_43 , including interfering multiple access networks @cite_42 @cite_22 @cite_48 . In addition to the multiple access scenarios in which collisions can be avoided, it is of significant importance how to manage interference in random access networks. For multi-cell random access networks, several studies were carried out to manage interference by performing IA @cite_34 @cite_7 @cite_6 or successive interference cancellation (SIC) @cite_40 @cite_36 . In @cite_11 @cite_13 , decentralized power allocation approaches were introduced by means of interference mitigation for random access with capabilities of multi-packet reception and SIC at the receiver.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_22", "@cite_7", "@cite_41", "@cite_48", "@cite_36", "@cite_42", "@cite_6", "@cite_44", "@cite_43", "@cite_40", "@cite_31", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "2012454404", "", "", "", "1979408141", "", "", "2533153119", "", "", "", "1997341747", "2156315971", "2146102702", "", "2190095468" ], "abstract": [ "We provide inner bound and outer bound for the total number of degrees of freedom of the K user multiple-input multiple-output (MIMO) Gaussian interference channel with M antennas at each transmitter and N antennas at each receiver if the channel coefficients are time-varying and drawn from a continuous distribution. The bounds are tight when the ratio [(max(M,N)) (min(M,N))]=R is equal to an integer. For this case, we show that the total number of degrees of freedom is equal to min(M,N)K if K ≤ R and min(M,N)[(R) (R+1)]K if K > R. Achievability is based on interference alignment. We also provide examples where using interference alignment combined with zero forcing can achieve more degrees of freedom than merely zero forcing for some MIMO interference channels with constant channel coefficients.", "", "", "", "For the fully connected K user wireless interference channel where the channel coefficients are time-varying and are drawn from a continuous distribution, the sum capacity is characterized as C(SNR)=K 2log(SNR)+o(log(SNR)) . Thus, the K user time-varying interference channel almost surely has K 2 degrees of freedom. Achievability is based on the idea of interference alignment. Examples are also provided of fully connected K user interference channels with constant (not time-varying) coefficients where the capacity is exactly achieved by interference alignment at all SNR values.", "", "", "In this paper, we propose a new way of interference management for cellular networks. We develop the scheme that approaches to interference-free degree-of-freedom (dof) as the number K of users in each cell increases. Also we find the corresponding bandwidth scaling conditions for typical wireless channels: multi-path channels and single-path channels with propagation delay. The scheme is based on interference alignment. Especially for more-than-two-cell cases where there are multiple non-intended BSs, we propose a new version of interference alignment, namely subspace interference alignment. The idea is to align interferences into multi-dimensional subspace (instead of one dimension) for simultaneous alignments at multiple non-intended BSs. The proposed scheme requires finite dimensions growing linearly with K, i.e., O(K).", "", "", "", "We introduce a framework to study slotted Aloha with cooperative base stations. Assuming a geographic-proximity communication model, we propose several decoding algorithms with different degrees of base stations' cooperation (non-cooperative, spatial, temporal, and spatio-temporal). With spatial cooperation, neighboring base stations inform each other whenever they collect a user within their coverage overlap; temporal cooperation corresponds to (temporal) successive interference cancellation done locally at each station. We analyze the four decoding algorithms and establish several fundamental results. With all algorithms, the peak throughput (average number of decoded users per slot, across all base stations) increases linearly with the number of base stations. 
Further, temporal and spatio-temporal cooperations exhibit a threshold behavior with respect to the normalized load (number of users per station, per slot). There exists a positive load @math , such that, below @math , the decoding probability is asymptotically maximal possible, equal the probability that a user is heard by at least one base station; with non-cooperative decoding and spatial cooperation, we show that @math is zero. Finally, with spatio-temporal cooperation, we optimize the degree distribution according to which users transmit their packet replicas; the optimum is in general very different from the corresponding optimal distribution of the single-base station system.", "We obtain Shannon-theoretic limits for a very simple cellular multiple-access system. In our model the received signal at a given cell site is the sum of the signals transmitted from within that cell plus a factor spl alpha (0 spl les spl alpha spl les 1) times the sum of the signals transmitted from the adjacent cells plus ambient Gaussian noise. Although this simple model is scarcely realistic, it nevertheless has enough meat so that the results yield considerable insight into the workings of real systems. We consider both a one dimensional linear cellular array and the familiar two-dimensional hexagonal cellular pattern. The discrete-time channel is memoryless. We assume that N contiguous cells have active transmitters in the one-dimensional case, and that N sup 2 contiguous cells have active transmitters in the two-dimensional case. There are K transmitters per cell. Most of our results are obtained for the limiting case as N spl rarr spl infin . The results include the following. (1) We define C sub N ,C spl circ sub N as the largest achievable rate per transmitter in the usual Shannon-theoretic sense in the one- and two-dimensional cases, respectively (assuming that all signals are jointly decoded). We find expressions for limN spl rarr spl infin C sub N and limN spl rarr spl infin C spl circ sub N . (2) As the interference parameter spl alpha increases from 0, C sub N and C spl circ sub N increase or decrease according to whether the signal-to-noise ratio is less than or greater than unity. (3) Optimal performance is attainable using TDMA within the cell, but using TDMA for adjacent cells is distinctly suboptimal. (4) We suggest a scheme which does not require joint decoding of all the users, and is, in many cases, close to optimal. >", "The throughput of existing MIMO LANs is limited by the number of antennas on the AP. This paper shows how to overcome this limit. It presents interference alignment and cancellation (IAC), a new approach for decoding concurrent sender-receiver pairs in MIMO networks. IAC synthesizes two signal processing techniques, interference alignment and interference cancellation, showing that the combination applies to scenarios where neither interference alignment nor cancellation applies alone. We show analytically that IAC almost doubles the throughput of MIMO LANs. We also implement IAC in GNU-Radio, and experimentally demonstrate that for 2x2 MIMO LANs, IAC increases the average throughput by 1.5x on the downlink and 2x on the uplink.", "", "This letter proposes a decentralized power allocation approach for random access with capabilities of multi-packet reception (MPR) and successive interference cancellation (SIC). 
Considering specific features of SIC, a bottom-up per-level algorithm is proposed to obtain discrete transmission power levels and the corresponding probabilities. Comparing with the conventional power allocation scheme with MPR and SIC, our approach significantly improves the system sum rate; this is confirmed by computer simulations." ] }
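The degrees-of-freedom statement in the paragraph above (and in the @cite_41 abstract) can be written out explicitly in LaTeX; this simply restates the quoted result, with no new claims:

```latex
% K-user time-varying interference channel under interference alignment:
C_{\Sigma}(\mathrm{SNR}) = \frac{K}{2}\,\log(\mathrm{SNR}) + o\bigl(\log(\mathrm{SNR})\bigr),
\qquad
\mathrm{DoF} = \lim_{\mathrm{SNR}\to\infty}\frac{C_{\Sigma}(\mathrm{SNR})}{\log(\mathrm{SNR})} = \frac{K}{2}.
```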
1708.03020
2963492344
We consider a non-stationary sequential stochastic optimization problem, in which the underlying cost functions change over time under a variation budget constraint. We propose an @math -variation functional to quantify the change, which yields less variation for dynamic function sequences whose changes are constrained to short time periods or small subsets of input domain. Under the @math -variation constraint, we derive both upper and matching lower regret bounds for smooth and strongly convex function sequences, which generalize previous results in (2015). Furthermore, we provide an upper bound for general convex function sequences with noisy gradient feedback, which matches the optimal rate as @math . Our results reveal some surprising phenomena under this general variation functional, such as the curse of dimensionality of the function domain. The key technical novelties in our analysis include affinity lemmas that characterize the distance of the minimizers of two convex functions with bounded Lp difference, and a cubic spline based construction that attains matching lower bounds.
Bandit convex optimization is a combination of stochastic optimization and online convex optimization, where the stationary benchmark in hindsight of a sequence of arbitrary convex functions @math is used to evaluate regret. At each time @math , only the function evaluation at the queried point @math (or its noisy version) is revealed to the learning algorithm. Despite its similarity to stochastic and/or online convex optimization, convex bandits are considerably harder due to the lack of first-order information and the arbitrary change of functions. @cite_4 proposed a novel finite-difference gradient estimator, which was adapted by @cite_6 to an ellipsoidal gradient estimator that achieves @math regret for constrained smooth and strongly convex bandit problems. For the non-smooth and non-strongly convex bandit problem, the recent work of @cite_10 attains @math regret with an explicit algorithm whose regret and running time both depend polynomially on the dimension @math .
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_6" ], "mid": [ "2473549844", "2952840318", "2151056989" ], "abstract": [ "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .", "We consider a the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a signle point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if the each function is revealed after the choice is made, then one can achieve vanishingly small regret relative the best single decision chosen in hindsight. We extend this to the bandit setting where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, with access to the gradient (only being able to evaluate the function at a single point).", "Bandit Convex Optimization (BCO) is a fundamental framework for decision making under uncertainty, which generalizes many problems from the realm of online and statistical learning. While the special case of linear cost functions is well understood, a gap on the attainable regret for BCO with nonlinear losses remains an important open question. In this paper we take a step towards understanding the best attainable regret bounds for BCO: we give an efficient and near-optimal regret algorithm for BCO with strongly-convex and smooth loss functions. In contrast to previous works on BCO that use time invariant exploration schemes, our method employs an exploration scheme that shrinks with time." ] }
1906.00097
2947328144
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
Parameters in two architectures are aligned if they have some learnable tensor in common. An alignment across architectures implies how tasks are related, and how strongly they are related. The goal of DMTL is to improve performance across tasks through joint training of aligned architectures, exploiting inter-task regularities. In recent years, DMTL has been applied within areas such as vision @cite_62 @cite_2 @cite_52 @cite_5 @cite_45 @cite_26 , natural language @cite_46 @cite_36 @cite_60 @cite_67 @cite_0 , speech @cite_43 @cite_65 @cite_63 , and reinforcement learning @cite_61 @cite_23 @cite_25 . The rest of this section reviews existing DMTL methods, showing that none of these methods satisfy both conditions (1) and (2).
{ "cite_N": [ "@cite_61", "@cite_67", "@cite_62", "@cite_26", "@cite_60", "@cite_36", "@cite_65", "@cite_52", "@cite_0", "@cite_43", "@cite_45", "@cite_63", "@cite_2", "@cite_23", "@cite_5", "@cite_46", "@cite_25" ], "mid": [ "2964262254", "", "2964056935", "", "", "", "", "", "", "2407793339", "2407277018", "", "", "", "", "2117130368", "" ], "abstract": [ "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.", "", "Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks in image classification, object and part detection, boundary extraction, etc. However, a major advantage that natural intelligences still have is that they work well for all perceptual problems together, solving them efficiently and coherently in an integrated manner. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call multinet, in which not only deep image features are shared between tasks, but where tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.", "", "", "", "", "", "", "We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. 
In scenarios with a limited amount of available adaptation data, most senones are usually rarely seen or not observed, and consequently the ability to model them in a new condition is often not fully exploited. With the original senone classification task as the primary task, and adding auxiliary mono-phone senone-cluster classification as the secondary tasks, multi-task learning (MTL) is employed to adapt the DNN parameters. With the proposed MTL adaptation framework, we improve the learning ability of the original DNN structure, then enlarge the coverage of the acoustic space to deal with the unseen senone problem, and thus enhance the discrimination power of the adapted DNN models. Experimental results on the 20,000-word open vocabulary WSJ task demonstrate that the proposed framework consistently outperforms the conventional linear hidden layer adaptation schemes without MTL by providing 3.2% relative word error rate reduction (WERR) with only a single adaptation utterance, and 10.7% WERR with 40 adaptation utterances against the un-adapted DNN models.", "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.", "", "", "", "", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.", "" ] }
1906.00097
2947328144
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
The classical approach to DMTL considers a joint model across tasks in which some aligned layers are shared completely, while the other layers remain task-specific @cite_1 . In practice, the most common approach is to share all layers except the final classification layers @cite_61 @cite_22 @cite_50 @cite_43 @cite_39 @cite_13 @cite_47 @cite_3 @cite_24 . A more flexible approach is not to share parameters exactly across shared layers, but to factorize layer parameters into shared and task-specific factors @cite_31 @cite_14 @cite_27 @cite_54 @cite_7 @cite_45 . Such approaches work for any set of architectures that have a known set of aligned layers. However, these methods apply only when such an alignment is known. That is, they do not meet condition (2).
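The two sharing schemes above can be sketched side by side (PyTorch; all shapes and the rank-8 factorization are illustrative assumptions, not values from the cited methods): (a) hard sharing of a trunk with per-task heads, and (b) factorizing each layer's weight into a shared factor and a small task-specific factor.

```python
import torch
import torch.nn as nn

# (a) Hard sharing: all layers shared except the final classification heads.
trunk = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
heads = nn.ModuleList([nn.Linear(64, n) for n in (10, 5)])  # one head per task

def hard_shared(x, task):
    return heads[task](trunk(x))

# (b) Factorized sharing: each task's weight is the product of a shared factor
# and a small task-specific factor, so the layer is only partially shared.
shared_factor = nn.Parameter(torch.randn(64, 8))
task_factors = nn.ParameterList([nn.Parameter(torch.randn(8, 32)) for _ in range(2)])

def factorized(x, task):
    weight = shared_factor @ task_factors[task]  # (64, 32) weight for this task
    return x @ weight.t()

x = torch.randn(4, 32)
print(hard_shared(x, 0).shape, factorized(x, 1).shape)
```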
{ "cite_N": [ "@cite_61", "@cite_31", "@cite_14", "@cite_22", "@cite_7", "@cite_54", "@cite_1", "@cite_3", "@cite_39", "@cite_24", "@cite_43", "@cite_27", "@cite_45", "@cite_50", "@cite_47", "@cite_13" ], "mid": [ "2964262254", "2065180801", "", "2251743902", "2964344823", "", "2963199420", "2094035326", "2551887912", "1896424170", "2407793339", "", "2407277018", "2025198378", "2290180618", "2295072214" ], "abstract": [ "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.", "We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select--not learn--a few common variables across the tasks.", "", "In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. 
We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.", "In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL as well as encompassing various classic and recent MTL MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is the analogous to ZSL but for novel domains: A model for an unseen domain can be generated by its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives.", "", "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a "distilled" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.", "In this paper we demonstrate how to improve the performance of deep neural network (DNN) acoustic models using multi-task learning. In multi-task learning, the network is trained to perform both the primary classification task and one or more secondary tasks using a shared representation. The additional model parameters associated with the secondary tasks represent a very small increase in the number of trained parameters, and can be discarded at runtime. In this paper, we explore three natural choices for the secondary task: the phone label, the phone context, and the state context.
We demonstrate that, even on a strong baseline, multi-task learning can provide a significant decrease in error rate. Using phone context, the phonetic error rate (PER) on TIMIT is reduced from 21.63% to 20.25% on the core test set, surpassing the best performance in the literature for a DNN that uses a standard feed-forward network architecture.", "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87% expert human performance on Labyrinth.", "Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].", "We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. In scenarios with a limited amount of available adaptation data, most senones are usually rarely seen or not observed, and consequently the ability to model them in a new condition is often not fully exploited. With the original senone classification task as the primary task, and adding auxiliary mono-phone senone-cluster classification as the secondary tasks, multi-task learning (MTL) is employed to adapt the DNN parameters. With the proposed MTL adaptation framework, we improve the learning ability of the original DNN structure, then enlarge the coverage of the acoustic space to deal with the unseen senone problem, and thus enhance the discrimination power of the adapted DNN models.
Experimental results on the 20,000-word open vocabulary WSJ task demonstrate that the proposed framework consistently outperforms the conventional linear hidden layer adaptation schemes without MTL by providing 3.2% relative word error rate reduction (WERR) with only a single adaptation utterance, and 10.7% WERR with 40 adaptation utterances against the un-adapted DNN models.", "", "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.", "In the deep neural network (DNN), the hidden layers can be considered as increasingly complex feature transformations and the final softmax layer as a log-linear classifier making use of the most abstract features computed in the hidden layers. While the loglinear classifier should be different for different languages, the feature transformations can be shared across languages. In this paper we propose a shared-hidden-layer multilingual DNN (SHL-MDNN), in which the hidden layers are made common across many languages while the softmax layers are made language dependent. We demonstrate that the SHL-MDNN can reduce errors by 3-5%, relatively, for all the languages decodable with the SHL-MDNN, over the monolingual DNNs trained using only the language specific data. Further, we show that the learned hidden layers sharing across languages can be transferred to improve recognition accuracy of new languages, with relative error reductions ranging from 6% to 28% against DNNs trained without exploiting the transferred hidden layers. It is particularly interesting that the error reduction can be achieved for the target language that is in different families of the languages used to learn the hidden layers.", "We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method, called HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and perform significantly better than many competitive algorithms for each of these four tasks.", "Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks.
However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation." ] }
1906.00097
2947328144
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
One approach to overcoming the alignment problem is to design an entirely new architecture that integrates information from different tasks and is maximally shared across tasks @cite_62 @cite_21 @cite_53 . Such an approach can even be used to share knowledge across disparate modalities @cite_53 . However, by disregarding task-specific architectures, this approach does not meet condition (1). Related approaches attempt to learn how to assemble a set of shared modules in different ways to solve different tasks, whether by gradient descent @cite_16 , reinforcement learning @cite_28 , or evolutionary architecture search @cite_8 . These methods also construct new architectures, so they do not meet condition (1); however, they have shown that including a small number of location-specific parameters is crucial for sharing functionality across diverse locations.
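As a rough sketch of such learned assembly, in the spirit of the soft-ordering idea @cite_16 (the module count, depth, and softmax parameterization below are assumptions for illustration, not the published configuration), a small set of location-specific scalars decides how a pool of shared modules is combined at each depth for each task:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D, H, num_tasks = 3, 3, 16, 2  # modules, depths, width, tasks (all assumed)
modules = nn.ModuleList([nn.Linear(H, H) for _ in range(K)])

# The only location-specific parameters: one mixing logit per (task, depth, module).
logits = nn.Parameter(torch.zeros(num_tasks, D, K))

def soft_ordered(x, task):
    # At every depth, apply all shared modules and mix their outputs with task-
    # and depth-specific softmax weights, learned jointly with the modules.
    for d in range(D):
        w = F.softmax(logits[task, d], dim=0)
        x = torch.relu(sum(w[k] * modules[k](x) for k in range(K)))
    return x

out = soft_ordered(torch.randn(4, H), task=0)
```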
{ "cite_N": [ "@cite_62", "@cite_8", "@cite_28", "@cite_53", "@cite_21", "@cite_16" ], "mid": [ "2964056935", "2963216850", "2963393838", "2626792426", "2556468274", "2963704251" ], "abstract": [ "Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks in image classification, object and part detection, boundary extraction, etc. However, a major advantage that natural intelligences still have is that they work well for all perceptual problems together, solving them efficiently and coherently in an integrated manner. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call multinet, in which not only deep image features are shared between tasks, but where tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.", "Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state of the art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general.", "Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. 
Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time.", "Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.", "Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce such a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. All layers include shortcut connections to both word representations and lower-level task predictions. We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end trainable model obtains state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment. It also performs competitively on POS tagging. Our dependency parsing layer relies only on a single feed-forward pass and does not require a beam search.", "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task." ] }
1906.00110
2947039732
The Price of Anarchy (PoA) is a well-established game-theoretic concept to shed light on coordination issues arising in open distributed systems. Leaving agents to selfishly optimize comes with the risk of ending up in sub-optimal states (in terms of performance and/or costs), compared to a centralized system design. However, the PoA relies on strong assumptions about agents' rationality (e.g., resources and information) and interactions, whereas in many distributed systems agents interact locally with bounded resources. They do so repeatedly over time (in contrast to "one-shot games"), and their strategies may evolve. Using a more realistic evolutionary game model, this paper introduces a realized evolutionary Price of Anarchy (ePoA). The ePoA allows an exploration of equilibrium selection in dynamic distributed systems with multiple equilibria, based on local interactions of simple memoryless agents. Considering a fundamental game related to virus propagation on networks, we present analytical bounds on the ePoA in basic network topologies and for different strategy update dynamics. In particular, deriving stationary distributions of the stochastic evolutionary process, we find that the Nash equilibria are not always the most abundant states, and that different processes can feature significant off-equilibrium behavior, leading to a significantly higher ePoA compared to the PoA studied traditionally in the literature.
While in many textbook examples Nash equilibria are typically "bad" and highlight a failure of cooperation (e.g., the Prisoner's Dilemma or the tragedy of the commons), research shows that in many application domains even the worst game-theoretic equilibria are often fairly close to the optimal outcome @cite_18 . However, these examples also have in common that the players have information about the game. While the study of games with incomplete information has a long tradition @cite_0 , much less is known today about the price of anarchy in games with partial information based on local interactions; in general, such extensions are believed to be challenging @cite_18 . An interesting line of recent work initiated the study of Bayes-Nash equilibria in games of incomplete information (e.g., @cite_36 ), and in particular the Bayes-Nash Price of Anarchy @cite_18 .
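For readers unfamiliar with the PoA, it can be computed by direct enumeration in a toy full-information game (a sketch; the 2x2 cost matrix is an arbitrary Prisoner's-Dilemma-style example, not taken from the cited works): PoA = cost of the worst Nash equilibrium divided by the optimal social cost.

```python
import itertools

# Two players, two strategies each; costs[(s1, s2)] = (cost to 1, cost to 2).
# Chosen so that the unique Nash equilibrium (1, 1) costs twice the optimum (0, 0).
costs = {(0, 0): (1, 1), (0, 1): (3, 0), (1, 0): (0, 3), (1, 1): (2, 2)}

def is_nash(s1, s2):
    # No player can lower its own cost by unilaterally switching strategy.
    return (costs[(s1, s2)][0] <= costs[(1 - s1, s2)][0]
            and costs[(s1, s2)][1] <= costs[(s1, 1 - s2)][1])

social = {p: sum(costs[p]) for p in itertools.product((0, 1), repeat=2)}
nash = [p for p in social if is_nash(*p)]
print(nash, max(social[p] for p in nash) / min(social.values()))  # [(1, 1)] 2.0
```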
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_36" ], "mid": [ "2139774323", "2136124238", "2132467626" ], "abstract": [ "(This article originally appeared in Management Science, November 1967, Volume 14, Number 3, pp. 159-182, published by The Institute of Management Sciences.) The paper develops a new theory for the analysis of games with incomplete information where the players are uncertain about some important parameters of the game situation, such as the payoff functions, the strategies available to various players, the information other players have about the game, etc. However, each player has a subjective probability distribution over the alternative possibilities. In most of the paper it is assumed that these probability distributions entertained by the different players are mutually \"consistent,\" in the sense that they can be regarded as conditional probability distributions derived from a certain \"basic probability distribution\" over the parameters unknown to the various players. But later the theory is extended also to cases where the different players' subjective probability distributions fail to satisfy this consistency assumption. In cases where the consistency assumption holds, the original game can be replaced by a game where nature first conducts a lottery in accordance with the basic probability distribution, and the outcome of this lottery will decide which particular subgame will be played, i.e., what the actual values of the relevant parameters will be in the game. Yet, each player will receive only partial information about the outcome of the lottery, and about the values of these parameters. However, every player will know the \"basic probability distribution\" governing the lottery. Thus, technically, the resulting game will be a game with complete information. It is called the Bayes-equivalent of the original game. Part I of the paper describes the basic model and discusses various intuitive interpretations for the latter. Part II shows that the Nash equilibrium points of the Bayes-equivalent game yield \"Bayesian equilibrium points\" for the original game. Finally, Part III considers the main properties of the \"basic probability distribution.\"", "We define smooth games of incomplete information. We prove an \"extension theorem\" for such games: price of anarchy bounds for pure Nash equilibria for all induced full-information games extend automatically, without quantitative degradation, to all mixed-strategy Bayes-Nash equilibria with respect to a product prior distribution over players' preferences. We also note that, for Bayes-Nash equilibria in games with correlated player preferences, there is no general extension theorem for smooth games. We give several applications of our definition and extension theorem. First, we show that many games of incomplete information for which the price of anarchy has been studied are smooth in our sense. Thus our extension theorem unifies much of the known work on the price of anarchy in games of incomplete information. Second, we use our extension theorem to prove new bounds on the price of anarchy of Bayes-Nash equilibria in congestion games with incomplete information.", "We provide efficient algorithms for finding approximate Bayes-Nash equilibria (BNE) in graphical, specifically tree, games of incomplete information. In such games an agent's payoff depends on its private type as well as on the actions of the agents in its local neighborhood in the graph. 
We consider two classes of such games: (1) arbitrary tree-games with discrete types, and (2) tree-games with continuous types but with constraints on the effect of type on payoffs. For each class we present a message passing on the game-tree algorithm that computes an ε-BNE in time polynomial in the number of agents and the approximation parameter 1/ε." ] }
1906.00110
2947039732
The Price of Anarchy (PoA) is a well-established game-theoretic concept to shed light on coordination issues arising in open distributed systems. Leaving agents to selfishly optimize comes with the risk of ending up in sub-optimal states (in terms of performance and/or costs), compared to a centralized system design. However, the PoA relies on strong assumptions about agents' rationality (e.g., resources and information) and interactions, whereas in many distributed systems agents interact locally with bounded resources. They do so repeatedly over time (in contrast to "one-shot games"), and their strategies may evolve. Using a more realistic evolutionary game model, this paper introduces a realized evolutionary Price of Anarchy (ePoA). The ePoA allows an exploration of equilibrium selection in dynamic distributed systems with multiple equilibria, based on local interactions of simple memoryless agents. Considering a fundamental game related to virus propagation on networks, we present analytical bounds on the ePoA in basic network topologies and for different strategy update dynamics. In particular, deriving stationary distributions of the stochastic evolutionary process, we find that the Nash equilibria are not always the most abundant states, and that different processes can feature significant off-equilibrium behavior, leading to a significantly higher ePoA compared to the PoA studied traditionally in the literature.
More specifically, in this paper we revisited the virus inoculation game of @cite_32 . Traditional virus propagation models studied infection in terms of birth and death rates of viruses @cite_2 @cite_16 , as well as through local interactions on networks @cite_6 @cite_3 @cite_43 (e.g., the Internet @cite_11 @cite_1 ). An interesting study is due to Montanari and Saberi @cite_15 , who compare game-theoretic and epidemic models, investigating differences between the spread of viruses, new technologies, and new political or social beliefs.
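The cost structure of the inoculation game can be sketched in a few lines (following the game description of @cite_32; the toy graph and the values of C and L are arbitrary assumptions): an inoculated node pays C, while an insecure node pays L times the probability that a uniformly random infection starting point reaches it through insecure nodes.

```python
import networkx as nx

C, L = 0.5, 1.0       # inoculation cost and infection loss (arbitrary values)
G = nx.path_graph(5)  # toy network; the infection starts at a uniform random node
n = G.number_of_nodes()

def expected_cost(node, inoculated):
    if node in inoculated:
        return C
    # Attack component: insecure nodes reachable from `node` without passing
    # through an inoculated node; the node is infected iff the virus starts there.
    insecure = G.subgraph(set(G) - set(inoculated))
    component = nx.node_connected_component(insecure, node)
    return L * len(component) / n

profile = {2}         # only the middle node of the path inoculates
print([round(expected_cost(v, profile), 2) for v in G])  # [0.4, 0.4, 0.5, 0.4, 0.4]
```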
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_3", "@cite_6", "@cite_43", "@cite_2", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "", "2152769037", "2108050790", "2153692272", "2140697769", "114870970", "2142897866", "", "2123279152" ], "abstract": [ "", "We propose a simple game for modeling containment of the spread of viruses in a graph of n nodes. Each node must choose to either install anti-virus software at some known cost C, or risk infection and a loss L if a virus that starts at a random initial point in the graph can reach it without being stopped by some intermediate node. The goal of individual nodes is to minimize their individual expected cost. We prove many game theoretic properties of the model, including an easily applied characterization of Nash equilibria, culminating in our showing that allowing selfish users to choose Nash equilibrium strategies is highly undesirable, because the price of anarchy is an unacceptable Θ(n) in the worst case. This shows in particular that a centralized solution can give a much better total cost than an equilibrium solution. Though it is NP-hard to compute such a social optimum, we show that the problem can be reduced to a previously unconsidered combinatorial problem that we call the sum-of-squares partition problem. Using a greedy algorithm based on sparse cuts, we show that this problem can be approximated to within a factor of O(log2 n), giving the same approximation ratio for the inoculation game.", "To understand the current extent of the computer virus problem and predict its future course, the authors have conducted a statistical analysis of computer virus incidents in a large, stable sample population of PCs and developed new epidemiological models of computer virus spread. Only a small fraction of all known viruses have appeared in real incidents, partly because many viruses are below the theoretical epidemic threshold. The observed sub-exponential rate of viral spread can be explained by models of localized software exchange. A surprisingly small fraction of machines in well-protected business environments are infected. This may be explained by a model in which, once a machine is found to be infected, neighboring machines are checked for viruses. This kill signal idea could be implemented in networks to greatly reduce the threat of viral spread. A similar principle has been incorporated into a cost-effective anti-virus policy for organizations which works quite well in practice. >", "Analogies with biological disease with topological considerations added, which show that the spread of computer viruses can be contained, and the resulting epidemiological model are examined. The findings of computer virus epidemiology show that computer viruses are far less rife than many have claimed, that many fail to thrive, that even successful viruses spread at nowhere near the exponential rate that some have claimed, and that centralized reporting and response within an organization is an extremely effective defense. A case study is presented, and some steps for companies to take are suggested. >", "Viruses remain a significant threat to modern networked computer systems. Despite the best efforts of those who develop anti-virus systems, new viruses and new types of virus that are not dealt with by existing protection schemes appear regularly. In addition, the rate at which a virus can spread has risen dramatically with the increase in connectivity. 
Defenses against infections by known viruses rely at present on immunization yet, for a variety of reasons, immunization is often only effective on a subset of the nodes in a network and many nodes remain unprotected. Little is known about either the way in which a viral infection proceeds in general or the way that immunization affects the infection process. We present the results of a simulation study of the way in which virus infections propagate through certain types of network and of the effect that partial immunization has on the infection. The key result is that relatively low levels of immunization can slow an infection significantly.", "", "We study a simple game theoretic model for the spread of an innovation in a network. The diffusion of the innovation is modeled as the dynamics of a coordination game in which the adoption of a common strategy between players has a higher payoff. Classical results in game theory provide a simple condition for an innovation to become widespread in the network. The present paper characterizes the rate of convergence as a function of graph structure. In particular, we derive a dichotomy between well-connected (e.g. random) graphs that show slow convergence and poorly connected, low dimensional graphs that show fast convergence.", "", "Complex interacting networks are observed in systems from such diverse areas as physics, biology, economics, ecology, and computer science. For example, economic or social interactions often organize themselves in complex network structures. Similar phenomena are observed in traffic flow and in communication networks as the internet. In current problems of the Biosciences, prominent examples are protein networks in the living cell, as well as molecular networks in the genome. On larger scales one finds networks of cells as in neural networks, up to the scale of organisms in ecological food webs. This book defines the field of complex interacting networks in its infancy and presents the dynamics of networks and their structure as a key concept across disciplines. The contributions present common underlying principles of network dynamics and their theoretical description and are of interest to specialists as well as to the non-specialized reader looking for an introduction to this new exciting field. Theoretical concepts include modeling networks as dynamical systems with numerical methods and new graph theoretical methods, but also focus on networks that change their topology as in morphogenesis and self-organization. The authors offer concepts to model network structures and dynamics, focussing on approaches applicable across disciplines." ] }
1906.00184
2947908969
Image-to-image translation models have shown remarkable ability in transferring images among different domains. Most existing work follows the setting that the source and target domains are the same at training and inference time, which cannot be generalized to scenarios where an image must be translated from one unseen domain to another. In this work, we propose the Unsupervised Zero-Shot Image-to-image Translation (UZSIT) problem, which aims to learn a model that can transfer translation knowledge from seen domains to unseen domains. Accordingly, we propose a framework called ZstGAN: By introducing an adversarial training scheme, ZstGAN learns to model each domain with a domain-specific feature distribution that is semantically consistent on vision and attribute modalities. Then the domain-invariant features are disentangled with a shared encoder for image generation. We carry out extensive experiments on CUB and FLO datasets, and the results demonstrate the effectiveness of the proposed method on the UZSIT task. Moreover, ZstGAN shows significant accuracy improvements over state-of-the-art zero-shot learning methods on CUB and FLO.
Image generation has been widely investigated in recent years. Most works focus on modeling the natural image distribution. The Generative Adversarial Network (GAN) @cite_10 was first proposed to generate images from random variables via a two-player minimax game: a generator G tries to create fake but plausible images, while a discriminator D is trained to distinguish between real and fake images. To address the stability issues of GANs, Wasserstein-GAN (WGAN) @cite_26 was proposed to optimize an approximation of the Wasserstein distance. To further mitigate WGAN's vanishing and exploding gradient problems, @cite_33 proposed WGAN-GP, which uses a gradient penalty instead of weight clipping to enforce the Lipschitz constraint in WGAN. @cite_16 proposed LSGAN and showed that optimizing its least-squares cost function amounts to minimizing a Pearson @math divergence. In this paper, we build on WGAN-GP @cite_33 to generate domain-specific features and translated images.
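Since we build on it, the gradient-penalty term of WGAN-GP @cite_33 can be sketched as follows (PyTorch; the toy critic is an assumption, and the penalty weight λ = 10 is the value commonly used in practice rather than anything specific to our setting):

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, lam=10.0):
    # Penalize the critic's gradient norm, taken at random interpolates of real
    # and fake samples, for deviating from 1 (the Lipschitz constraint).
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

critic = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
real, fake = torch.randn(16, 8), torch.randn(16, 8)
loss = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
loss.backward()  # one critic update step would follow
```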
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_33", "@cite_16" ], "mid": [ "2739748921", "2099471712", "2962879692", "2593414223" ], "abstract": [ "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.", "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs." ] }
1906.00184
2947908969
Image-to-image translation models have shown remarkable ability in transferring images among different domains. Most existing work follows the setting that the source and target domains are the same at training and inference time, which cannot be generalized to scenarios where an image must be translated from one unseen domain to another. In this work, we propose the Unsupervised Zero-Shot Image-to-image Translation (UZSIT) problem, which aims to learn a model that can transfer translation knowledge from seen domains to unseen domains. Accordingly, we propose a framework called ZstGAN: By introducing an adversarial training scheme, ZstGAN learns to model each domain with a domain-specific feature distribution that is semantically consistent on vision and attribute modalities. Then the domain-invariant features are disentangled with a shared encoder for image generation. We carry out extensive experiments on CUB and FLO datasets, and the results demonstrate the effectiveness of the proposed method on the UZSIT task. Moreover, ZstGAN shows significant accuracy improvements over state-of-the-art zero-shot learning methods on CUB and FLO.
Zero-Shot Learning (ZSL) was first introduced by @cite_0 , where training and test classes are disjoint for object recognition. Traditional methods for ZSL learn an embedding from the visual space to the semantic space. At test time, the semantic vector of an unseen sample is extracted and the most likely class is predicted by a nearest-neighbor method @cite_5 @cite_4 @cite_19 . Recent works on ZSL have widely explored generative models. @cite_25 presented a deep generative model for ZSL based on the VAE @cite_28 . Building on the rapid progress of GANs, other approaches used GANs to synthesize visual representations for the seen and unseen classes @cite_15 @cite_24 . However, the generated images usually lack sufficient quality to train a classifier for both the seen and unseen classes. Hence, @cite_20 @cite_12 used GANs to synthesize CNN features rather than image pixels, conditioned on class-level semantic information. On the other hand, considering that ZSL is a domain-shift problem, @cite_13 @cite_27 presented Generalized ZSL (GZSL), which considers both seen and unseen classes at test time.
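The embedding-plus-nearest-neighbor recipe of the traditional methods can be sketched as follows (NumPy; the learned visual-to-semantic map is stubbed with a random matrix, and all dimensions are illustrative assumptions): embed the image feature in the semantic space, then predict the unseen class whose attribute vector is most similar.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))           # stand-in for a learned visual-to-semantic map
class_attrs = rng.normal(size=(5, 32))  # semantic vectors of 5 unseen classes

def predict_unseen(visual_feat):
    sem = visual_feat @ W               # embed the image feature in semantic space
    # Cosine similarity to each unseen-class prototype; the argmax is the prediction.
    sims = class_attrs @ sem / (np.linalg.norm(class_attrs, axis=1) * np.linalg.norm(sem))
    return int(np.argmax(sims))

print(predict_unseen(rng.normal(size=64)))
```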
{ "cite_N": [ "@cite_12", "@cite_4", "@cite_28", "@cite_0", "@cite_19", "@cite_24", "@cite_27", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "", "2950700180", "", "2134270519", "2950276680", "2963784503", "2400717490", "2564549723", "2762085884", "", "2964248207", "2963960318" ], "abstract": [ "", "Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the embedding space is trained jointly with the image transformation. In other cases the semantic embedding space is established by an independent natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional classification framing of image understanding, particularly in terms of the promise for zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. In this paper, we propose a simple method for constructing an image embedding system from any existing image classifier and a semantic word embedding model, which contains the @math class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional training. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.", "", "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.", "This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. 
Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.", "This paper addresses the task of learning an image classifier when some categories are defined by semantic descriptions only (e.g. visual attributes) while the others are defined by exemplar images as well. This task is often referred to as the Zero-Shot classification task (ZSC). Most of the previous methods rely on learning a common embedding space allowing to compare visual features of unknown categories with semantic descriptions. This paper argues that these approaches are limited as i) efficient discriminative classifiers can't be used ii) classification tasks with seen and unseen categories (Generalized Zero-Shot Classification or GZSC) can't be addressed efficiently. In contrast, this paper suggests to address ZSC and GZSC by i) learning a conditional generator using seen classes ii) generate artificial training examples for the categories without exemplars. ZSC is then turned into a standard supervised learning problem. Experiments with 4 generative models and 5 datasets experimentally validate the approach, giving state-of-the-art results on both ZSC and GZSC.", "We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional zero-shot learning (ZSL) that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this trade-off. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet with more than 20,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper bound on the performance limit of GZSL. In particular, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving ZSL.", "General zero-shot learning (ZSL) approaches exploit transfer learning via semantic knowledge space. In this paper, we reveal a novel relational knowledge transfer (RKT) mechanism for ZSL, which is simple, generic and effective. 
RKT resolves the inherent semantic shift problem existing in ZSL through restoring the missing manifold structure of unseen categories via optimizing semantic mapping. It extracts the relational knowledge from data manifold structure in semantic knowledge space based on sparse coding theory. The extracted knowledge is then transferred backwards to generate virtual data for unseen categories in the feature space. On the one hand, the generalizing ability of the semantic mapping function can be enhanced with the added data. On the other hand, the mapping function for unseen categories can be learned directly from only these generated data, achieving inspiring performance. Incorporated with RKT, even simple baseline methods can achieve good results. Extensive experiments on three challenging datasets show prominent performance obtained by RKT, and we obtain 82.43% accuracy on the Animals with Attributes dataset.", "Sufficient training examples are the fundamental requirement for most of the learning tasks. However, collecting well-labelled training examples is costly. Inspired by Zero-shot Learning (ZSL) that can make use of visual attributes or natural language semantics as an intermediate level clue to associate low-level features with high-level classes, in a novel extension of this idea, we aim to synthesise training data for novel classes using only semantic attributes. Despite the simplicity of this idea, there are several challenges. First, how to prevent the synthesised data from over-fitting to training classes? Second, how to guarantee the synthesised data is discriminative for ZSL tasks? Third, we observe that only a few dimensions of the learnt features gain high variances whereas most of the remaining dimensions are not informative. Thus, the question is how to make the concentrated information diffuse to most of the dimensions of synthesised data. To address the above issues, we propose a novel embedding algorithm named Unseen Visual Data Synthesis (UVDS) that projects semantic features to the high-dimensional visual feature space. Two main techniques are introduced in our proposed algorithm. (1) We introduce a latent embedding space which aims to reconcile the structural difference between the visual and semantic spaces, meanwhile preserve the local structure. (2) We propose a novel Diffusion Regularisation (DR) that explicitly forces the variances to diffuse over most dimensions of the synthesised data. By an orthogonal rotation (more precisely, an orthogonal transformation), DR can remove the redundant correlated attributes and further alleviate the over-fitting problem. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data for zero-shot learning. Extensive experimental results suggest that our proposed approach significantly outperforms the state-of-the-art methods.", "", "", "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution.
Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets - CUB, FLO, SUN, AWA and ImageNet - in both the zero-shot learning and generalized zero-shot learning settings." ] }
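At prediction time, the calibration idea in the generalized zero-shot abstract above reduces to subtracting a constant from the seen-class scores before taking an argmax over the union of classes. The following is a minimal sketch of that decision rule; the function and argument names are ours, not from the paper:

```python
import numpy as np

def calibrated_prediction(scores_seen, scores_unseen, gamma):
    # Penalize seen-class scores by the calibration factor gamma, then
    # predict over the union of seen and unseen classes.
    combined = np.concatenate([scores_seen - gamma, scores_unseen])
    return int(np.argmax(combined))  # index into [seen classes..., unseen classes...]
```

Sweeping gamma traces out the seen/unseen accuracy trade-off that the Area Under Seen-Unseen accuracy Curve mentioned in the abstract summarizes.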
1906.00117
2947794021
Recently, a method [7] was proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose a method, Model Agnostic Contrastive Explanations Method (MACEM), to generate contrastive explanations for any classification model where one is able to query the class probabilities for a desired input. This allows us to generate contrastive explanations for not only neural networks, but models such as random forests, boosted trees and even arbitrary ensembles that are still amongst the state-of-the-art when learning on structured data [13]. Moreover, to obtain meaningful explanations we propose a principled approach to handle real and categorical features leading to novel formulations for computing pertinent positives and negatives that form the essence of a contrastive explanation. A detailed treatment of the different data types of this nature was not performed in the previous work, which assumed all features to be positive real valued with zero being indicative of the least interesting value. We part with this strong implicit assumption and generalize these methods so as to be applicable across a much wider range of problem settings. We quantitatively and qualitatively validate our approach over 5 public datasets covering diverse domains.
Trust and transparency of AI systems have received a lot of attention recently @cite_35 . Explainability is considered to be one of the cornerstones for building trustworthy systems and has been of particular focus in the research community @cite_16 @cite_37 . Researchers are trying to build better-performing interpretable models @cite_5 @cite_14 @cite_9 @cite_32 @cite_36 @cite_24 as well as improved methods to understand black box models such as deep neural networks @cite_2 @cite_19 @cite_13 .
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_36", "@cite_9", "@cite_32", "@cite_24", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "", "2625712209", "2282479846", "2617799811", "1996796871", "2499224235", "2887252986", "1787224781", "2282821441", "2964134873", "2439568532", "2963276306" ], "abstract": [ "", "We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking it to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods in our framework portraying its general applicability. Finally, principled interpretable strategies are proposed and empirically evaluated on synthetic data, as well as on the largest public olfaction dataset that was made recently available olfs . We also experiment on MNIST with a simple target model and different oracle models of varying complexity. This leads to the insight that the improvement in the target model is not only a function of the oracle models performance, but also its relative complexity with respect to the target model.", "This paper proposes algorithms for learning two-level Boolean rules in Conjunctive Normal Form (CNF, i.e. AND-of-ORs) or Disjunctive Normal Form (DNF, i.e. OR-of-ANDs) as a type of human-interpretable classification model, aiming for a favorable trade-off between the classification accuracy and the simplicity of the rule. Two formulations are proposed. The first is an integer program whose objective function is a combination of the total number of errors and the total number of features used in the rule. We generalize a previously proposed linear programming (LP) relaxation from one-level to two-level rules. The second formulation replaces the 0-1 classification error with the Hamming distance from the current two-level rule to the closest rule that correctly classifies a sample. Based on this second formulation, block coordinate descent and alternating minimization algorithms are developed. Experiments show that the two-level rules can yield noticeably better performance than one-level rules due to their dramatically larger modeling capacity, and the two algorithms based on the Hamming distance formulation are generally superior to the other two-level rule learning methods in our comparison. A proposed approach to binarize any fractional values in the optimal solutions of LP relaxations is also shown to be effective.", "Interpretability has become incredibly important as machine learning is increasingly used to inform consequential decisions. We propose to construct global explanations of complex, blackbox models in the form of a decision tree approximating the original model---as long as the decision tree is a good approximation, then it mirrors the computation performed by the blackbox model. We devise a novel algorithm for extracting decision tree explanations that actively samples new training points to avoid overfitting. We evaluate our algorithm on a random forest to predict diabetes risk and a learned controller for cart-pole. Compared to several baselines, our decision trees are both substantially more accurate and equally or more interpretable based on a user study. 
Finally, we describe several insights provided by our interpretations, including a causal issue validated by a physician.", "In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.", "Supporting human decision-making is a major goal of data mining. The more decision-making is critical, the more interpretability is required in the predictive model. This paper proposes a new framework to build a fully interpretable predictive model for questionnaire data, while maintaining a reasonable prediction accuracy with regard to the final outcome. Such a model has applications in project risk assessment, in healthcare, in social studies, and, presumably, in any real-world application that relies on questionnaire data for informative and accurate prediction. Our framework is inspired by models in item response theory (IRT), which were originally developed in psychometrics with applications to standardized academic tests. We extend these models, which are essentially unsupervised, to the supervised setting. For model estimation, we introduce a new iterative algorithm by combining Gauss-Hermite quadrature with an expectation-maximization algorithm. The learned probabilistic model is linked to the metric learning framework for informative and accurate prediction. The model is validated by three real-world data sets: Two are from information technology project failure prediction and the other is an international social survey about people's happiness. To the best of our knowledge, this is the first work that leverages the IRT framework to provide informative and accurate prediction on ordinal questionnaire data.", "In this paper, we propose a new method called ProfWeight for transferring information from a pre-trained deep neural network that has a high test accuracy to a simpler interpretable model or a very shallow network of low complexity and a priori low test accuracy. We are motivated by applications in interpretability and model deployment in severely memory constrained environments (like sensors). Our method uses linear probes to generate confidence scores through flattened intermediate representations. 
Our transfer method involves a theoretically justified weighting of samples during the training of the simple model using confidence scores of these intermediate layers. The value of our method is first demonstrated on CIFAR-10, where our weighting method significantly improves (3-4%) networks with only a fraction of the number of Resnet blocks of a complex Resnet model. We further demonstrate operationally significant results on a real manufacturing problem, where we dramatically increase the test accuracy of a CART model (the domain standard) by roughly 13%.", "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "Falling rule lists are classification models consisting of an ordered list of if-then rules, where (i) the order of rules determines which example should be classified by each rule, and (ii) the estimated probability of success decreases monotonically down the list. 
These kinds of rule lists are inspired by healthcare applications where patients would be stratified into risk sets and the highest at-risk patients should be considered first. We provide a Bayesian framework for learning falling rule lists that does not rely on traditional greedy decision tree learning methods.", "Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not.", "In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black box classifier such as a deep neural network. Given an input we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification and analogously what should be minimally and necessarily absent (viz. certain background pixels). We argue that such explanations are natural for humans and are used commonly in domains such as health care and criminology. What is minimally but critically absent is an important part of an explanation, which, to the best of our knowledge, has not been explicitly identified by current explanation methods that explain predictions of neural networks. We validate our approach on three real datasets obtained from diverse domains; namely, a handwritten digits dataset MNIST, a large procurement fraud dataset and a brain activity strength dataset. In all three cases, we witness the power of our approach in generating precise explanations that are also easy for human experts to understand and evaluate." ] }
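To make the query-access setting of the MACEM abstract in this record concrete, here is a toy sketch of a contrastive search against a black-box predict_proba: it greedily perturbs features until the predicted class flips, yielding a small class-changing perturbation. This illustrates the setting only and is not the actual MACEM optimization; all names are ours:

```python
import numpy as np

def toy_contrastive_perturbation(predict_proba, x, step=0.05, iters=500, seed=0):
    # Query-only greedy search: nudge one feature at a time, keeping the
    # nudge that most erodes confidence in the originally predicted class.
    rng = np.random.default_rng(seed)
    target = int(np.argmax(predict_proba(x)))
    delta = np.zeros_like(x, dtype=float)
    best = predict_proba(x)[target]
    for _ in range(iters):
        i = rng.integers(len(x))
        for sign in (1.0, -1.0):
            trial = delta.copy()
            trial[i] += sign * step
            probs = predict_proba(x + trial)
            if int(np.argmax(probs)) != target:
                return trial              # class flipped: contrastive change found
            if probs[target] < best:
                best, delta = probs[target], trial
    return None                           # no flip found within the budget
```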
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
In a @math -VAE @cite_14 , a free parameter @math multiplies the @math term in @math above. This objective @math remains a lower bound on the evidence.
{ "cite_N": [ "@cite_14" ], "mid": [ "2753738274" ], "abstract": [ "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
The term A is the total correlation (TC) of @math , a generalisation of mutual information to multiple variables @cite_25 . With this mean-field @math , Factor and @math -TCVAEs upweight this term, so we have an objective @math . @cite_28 gives a differentiable, stochastic approximation to @math , rendering this decomposition simple to use as a training objective with stochastic gradient descent. We note that the term A, the total correlation, is also the objective in Independent Component Analysis (ICA) @cite_11 @cite_13 .
{ "cite_N": [ "@cite_28", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2964127395", "", "2095439994", "2108384452" ], "abstract": [ "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the beta-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the beta-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.", "", "A set λ of stochastic variables, y1, y2,..., yn, is grouped into subsets, µ1, µ2,..., µk. The correlation existing in λ with respect to the µ's is adequately expressed by C= Σi=1k S(µi)-S(λ)≥0, where S(v) is the entropy function defined with reference to the variables y in subset v. For a given λ, C becomes maximum when each µi consists of only one variable, (n=k). The value Cis then called fhe total correlation in λ, Ctot(λ). The present paper gives various theorems, according to which Ctot(λ) can be decomposed in terms of the partial correlations existing in subsets of λ, and of quantities derivable therefrom. The information-theoretical meaning of each decomposition is carefully explained. As illustrations, two problems are discussed at the end of the paper: (1) redundancy in geometrical figures in pattern recognition, and (2) randomization effect of shuffling cards marked \"zero' or \"one.\"", "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
We now have a set of @math layers of @math variables: @math . The evidence lower bound for models of this form is: The simplest VAE with a hierarchy of conditional stochastic variables in the generative model is the Deep Latent Gaussian Model @cite_5 . The forward model factorises as a chain: Each @math is a Gaussian distribution with mean and variance parameterised by deep nets. @math is a unit isotropic Gaussian.
{ "cite_N": [ "@cite_5" ], "mid": [ "2962897886" ], "abstract": [ "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation - rules for gradient backpropagation through stochastic variables - and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
To perform amortised variational inference one introduces a recognition network, which can be any directed acyclic graph where each node, that is, each distribution over each @math , is Gaussian conditioned on its parents. This could be a chain, as in @cite_5 : Again, marginalising out intermediate @math layers, we see that @math is a non-Gaussian, highly flexible distribution.
{ "cite_N": [ "@cite_5" ], "mid": [ "2962897886" ], "abstract": [ "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation - rules for gradient backpropagation through stochastic variables - and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
This is the hierarchical version of the collapse of @math units in a single-layer VAE @cite_17 , but now the collapse is over entire layers @math . It is part of the motivation for the Ladder VAE @cite_1 and BIVA @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_17" ], "mid": [ "2913002991", "2963135265", "2963275229" ], "abstract": [ "With the introduction of the variational autoencoder (VAE), probabilistic latent variable models have received renewed attention as powerful generative models. However, their performance in terms of test likelihood and quality of generated samples has been surpassed by autoregressive models without stochastic units. Furthermore, flow-based models have recently been shown to be an attractive alternative that scales well to high-dimensional data. In this paper we close the performance gap by constructing VAE models that can effectively utilize a deep hierarchy of stochastic variables and model complex covariance structures. We introduce the Bidirectional-Inference Variational Autoencoder (BIVA), characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path. We show that BIVA reaches state-of-the-art test likelihoods, generates sharp and coherent natural images, and uses the hierarchy of latent variables to capture different aspects of the data distribution. We observe that BIVA, in contrast to recent results, can be used for anomaly detection. We attribute this to the hierarchy of latent variables which is able to extract high-level semantic features. Finally, we extend BIVA to semi-supervised classification tasks and show that it performs comparably to state-of-the-art results by generative adversarial networks.", "Variational autoencoders are powerful models for unsupervised learning. However deep models with several layers of dependent stochastic variables are difficult to train which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network. We show that this model provides state of the art predictive log-likelihood and tighter log-likelihood lower bound compared to the purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper more distributed hierarchy of latent variables. Finally, we observe that batch-normalization and deterministic warm-up (gradually turning on the KL-term) are crucial for training variational models with many stochastic layers.", "Abstract: The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. 
In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks." ] }
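One standard remedy for the layer-wise collapse discussed in this record is the deterministic warm-up mentioned in the Ladder VAE abstract: anneal the weight on the KL term from 0 to 1 over early training. A minimal sketch of such a schedule:

```python
def kl_weight(epoch, warmup_epochs=100):
    # Linearly turn on the KL term over the first warmup_epochs, so the
    # latent layers are used for reconstruction before being regularised
    # toward the prior (which is what causes whole layers to collapse).
    return min(1.0, epoch / float(warmup_epochs))

# e.g. loss = recon + kl_weight(epoch) * kl
```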
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
One of the first optimizations was the DHP algorithm proposed by @cite_25 . This algorithm uses a hashing scheme to collect upper bounds on the frequencies of the candidate patterns for the following pass. Patterns that are already known to turn out infrequent can then be eliminated from further consideration. This technique is effective only for the first few passes. Since our upper bound can be used to eliminate passes at the end, both techniques can be combined in the same algorithm.
{ "cite_N": [ "@cite_25" ], "mid": [ "2030969394" ], "abstract": [ "In this paper, we examine the issue of mining association rules among items in a large database of sales transactions. The mining of association rules can be mapped into the problem of discovering large itemsets where a large itemset is a group of items which appear in a sufficient number of transactions. The problem of discovering large itemsets can be solved by constructing a candidate set of itemsets first and then, identifying, within this candidate set, those itemsets that meet the large itemset requirement. Generally this is done iteratively for each large k-itemset in increasing order of k where a large k-itemset is a large itemset with k items. To determine large itemsets from a huge number of candidate large itemsets in early iterations is usually the dominating factor for the overall data mining performance. To address this issue, we propose an effective hash-based algorithm for the candidate set generation. Explicitly, the number of candidate 2-itemsets generated by the proposed algorithm is, in orders of magnitude, smaller than that by previous methods, thus resolving the performance bottleneck. Note that the generation of smaller candidate sets enables us to effectively trim the transaction database size at a much earlier stage of the iterations, thereby reducing the computational cost for later iterations significantly. Extensive simulation study is conducted to evaluate performance of the proposed algorithm." ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
The sampling algorithm proposed by Toivonen @cite_3 performs at most two scans through the database by picking a random sample from the database, then finding all frequent patterns that probably hold in the whole database, and then verifying the results with the rest of the database. In the cases where the sampling method does not produce all frequent patterns, the missing patterns can be found by generating all remaining potentially frequent patterns and verifying their frequencies during a second pass through the database. The probability of such a failure can be kept small by decreasing the minimal support threshold. However, for a reasonably small probability of failure, the threshold must be drastically decreased, which can again cause a combinatorial explosion of the number of candidate patterns.
{ "cite_N": [ "@cite_3" ], "mid": [ "1597561788" ], "abstract": [ "Discovery of association rules .is an important database mining problem. Current algorithms for finding association rules require several passes over the analyzed database, and obviously the role of I O overhead is very significant for very large databases. We present new algorithms that reduce the database activity considerably. The idea is to pick a Random sample, to find using this sample all association rules that probably hold in the whole database, and then to verify the results with the rest of the database. The algorithms thus produce exact association rules, not approximations based on a sample. The approach is, however, probabilistic, and in those rare cases where our sampling method does not produce all association rules, the missing rules can be found in a second pass. Our experiments show that the proposed algorithms can find association rules very efficiently in only one database" ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
The DIC algorithm, proposed by @cite_12 , tries to reduce the number of passes over the database by dividing the database into intervals of a specific size. First, all candidate patterns of size @math are generated. The frequencies of the candidate sets are then counted over the first interval of the database. Based on these frequencies, candidate patterns of size @math are generated and are counted over the next interval together with the patterns of size @math . In general, after every interval @math , candidate patterns of size @math are generated and counted. The algorithm stops if no more candidates can be generated. Again, this technique can be combined with our technique in the same algorithm.
{ "cite_N": [ "@cite_12" ], "mid": [ "2037965136" ], "abstract": [ "We consider the problem of analyzing market-basket data and present several important contributions. First, we present a new algorithm for finding large itemsets which uses fewer passes over the data than classic algorithms, and yet uses fewer candidate itemsets than methods based on sampling. We investigate the idea of item reordering, which can improve the low-level efficiency of the algorithm. Second, we present a new way of generating “implication rules,” which are normalized based on both the antecedent and the consequent and are truly implications (not simply a measure of co-occurrence), and we show how they produce more intuitive results than other methods. Finally, we show how different characteristics of real data, as opposed by synthetic data, can dramatically affect the performance of the system and the form of the results." ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
Also, current state-of-the-art algorithms for frequent itemset mining, such as Opportunistic Project @cite_6 and DCI @cite_14 , use several techniques within the same algorithm and switch between these techniques using several simple, but not watertight, heuristics.
{ "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2064853889", "2032226242" ], "abstract": [ "Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and or long patterns. In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods.", "In this paper we propose algorithms for generation of frequent item sets by successive construction of the nodes of a lexicographic tree of item sets. We discuss different strategies in generation and traversal of the lexicographic tree such as breadth-first search, depth-first search, or a combination of the two. These techniques provide different trade-offs in terms of the I O, memory, and computational time requirements. We use the hierarchical structure of the lexicographic tree to successively project transactions at each node of the lexicographic tree and use matrix counting on this reduced set of transactions for finding frequent item sets. We tested our algorithm on both real and synthetic data. We provide an implementation of the tree projection method which is up to one order of magnitude faster than other recent techniques in the literature. The algorithm has a well-structured data access pattern which provides data locality and reuse of data for multiple levels of the cache. We also discuss methods for parallelization of the TreeProjection algorithm." ] }
cs0112011
1632815424
We investigate ways to support interactive mining sessions, in the setting of association rule mining. In such sessions, users specify conditions (queries) on the associations to be generated. Our approach is a combination of the integration of querying conditions inside the mining phase, and the incremental querying of already generated associations. We present several concrete algorithms and compare their performance.
The idea that queries can be integrated into the mining algorithm was first put forward by Srikant, Vu, and Agrawal @cite_24 , who considered queries that are Boolean expressions over the presence or absence of certain items in the rules. Queries on bodies or heads specifically were not discussed. The authors considered three different approaches to the problem. The proposed algorithms are not optimal: they generate and test several itemsets that do not satisfy the query, and their optimizations also do not always become more efficient for more specific queries.
{ "cite_N": [ "@cite_24" ], "mid": [ "1538285186" ], "abstract": [ "The problem of discovering association rules has received considerable research attention and several fast algorithms for mining association rules have been developed. In practice, users are often interested in a subset of association rules. For example, they may only want rules that contain a specific item or rules that contain children of a specific item in a hierarchy. While such constraints can be applied as a post-processing step, integrating them into the mining algorithm can dramatically reduce the execution time. We consider the problem of integrating constraints that are Boolean expressions over the presence or absence of items into the association discovery algorithm. We present three integrated algorithms for mining association rules with item constraints and discuss their tradeoffs." ] }
cs0112011
1632815424
We investigate ways to support interactive mining sessions, in the setting of association rule mining. In such sessions, users specify conditions (queries) on the associations to be generated. Our approach is a combination of the integration of querying conditions inside the mining phase, and the incremental querying of already generated associations. We present several concrete algorithms and compare their performance.
Lakshmanan, Ng, Han, and Pang also worked on the integration of constraints on itemsets in mining, considering conjunctions of conditions on itemsets such as those considered here, as well as others (arbitrary Boolean combinations were not discussed) @cite_9 @cite_7 . Of the various strategies they present for the so-called “CAP” algorithm, the one that can handle the queries considered in the present paper is their strategy “II”. Again, this strategy generates and tests itemsets that do not satisfy the query. Also, their algorithms implement a rule-query by separately mining for possible heads and for possible bodies, while we tightly couple the querying of rules with the querying of sets. This work has also been further studied by Pei, Han, and Lakshmanan @cite_23 @cite_2 , and employed within the FP-growth algorithm.
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_7", "@cite_2" ], "mid": [ "2125714474", "1972870286", "2085638007", "2148693963" ], "abstract": [ "Currently, there is tremendous interest in providing ad-hoc mining capabilities in database management systems. As a first step towards this goal, in [15] we proposed an architecture for supporting constraint-based, human-centered, exploratory mining of various kinds of rules including associations, introduced the notion of constrained frequent set queries (CFQs), and developed effective pruning optimizations for CFQs with 1-variable (1-var) constraints. While 1-var constraints are useful for constraining the antecedent and consequent separately, many natural examples of CFQs illustrate the need for constraining the antecedent and consequent jointly, for which 2-variable (2-var) constraints are indispensable. Developing pruning optimizations for CFQs with 2-var constraints is the subject of this paper. But this is a difficult problem because: (i) in 2-var constraints, both variables keep changing and, unlike 1-var constraints, there is no fixed target for pruning; (ii) as we show, “conventional” monotonicity-based optimization techniques do not apply effectively to 2-var constraints. The contributions are as follows. (1) We introduce a notion of quasi-succinctness , which allows a quasi-succinct 2-var constraint to be reduced to two succinct 1-var constraints for pruning. (2) We characterize the class of 2-var constraints that are quasi-succinct. (3) We develop heuristic techniques for non-quasi-succinct constraints. Experimental results show the effectiveness of all our techniques. (4) We propose a query optimizer for CFQs and show that for a large class of constraints, the computation strategy generated by the optimizer is ccc-optimal , i.e., minimizing the effort incurred w.r.t. constraint checking and support counting.", "", "From the standpoint of supporting human-centered discovery of knowledge, the present-day model of mining association rules suffers from the following serious shortcomings: (i) lack of user exploration and control, (ii) lack of focus, and (iii) rigid notion of relationships. In effect, this model functions as a black-box, admitting little user interaction in between. We propose, in this paper, an architecture that opens up the black-box, and supports constraint-based, human-centered exploratory mining of associations. The foundation of this architecture is a rich set of constraint constructs, including domain, class, and SQL-style aggregate constraints, which enable users to clearly specify what associations are to be mined. We propose constrained association queries as a means of specifying the constraints to be satisfied by the antecedent and consequent of a mined association. In this paper, we mainly focus on the technical challenges in guaranteeing a level of performance that is commensurate with the selectivities of the constraints in an association query. To this end, we introduce and analyze two properties of constraints that are critical to pruning: anti-monotonicity and succinctness . We then develop characterizations of various constraints into four categories, according to these properties. Finally, we describe a mining algorithm called CAP, which achieves a maximized degree of pruning for all categories of constraints. Experimental results indicate that CAP can run much faster, in some cases as much as 80 times, than several basic algorithms. 
This demonstrates how important the succinctness and anti-monotonicity properties are, in delivering the performance guarantee.", "Recent work has highlighted the importance of the constraint based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. The authors study constraints which cannot be handled with existing theory and techniques. For example, avg(S) θ v, median(S) θ v, sum(S) θ v (S can contain items of arbitrary values) (θ ∈ {≥, ≤}), are customarily regarded as "tough" constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed." ] }
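Anti-monotone constraints like sum(S) ≤ v, central to CAP and the convertible-constraints work above, allow pruning before counting: if the constraint fails for an itemset it fails for every superset. A minimal sketch of this pruning combined with the usual levelwise subset check:

```python
def prune_candidates(candidates, frequent_prev, constraint):
    # Keep a k-itemset only if it satisfies the anti-monotone constraint
    # (e.g. lambda s: sum(price[i] for i in s) <= 100) and all of its
    # (k-1)-subsets were frequent, as in the levelwise algorithm.
    return [
        c for c in candidates
        if constraint(c) and all(c - {i} in frequent_prev for i in c)
    ]
```

Here candidates and frequent_prev are collections of frozensets; the constraint check costs nothing compared with a database scan, which is why pushing it before counting pays off.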
cs0202008
2952049303
Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.
Chord @cite_0 and CFS @cite_7 suggest alternatives to making the query response travel down the reverse query path back to the query issuer. Chord suggests iterative searches where the query issuer contacts each node on the query path one-by-one for the item of interest until one of the nodes is found to have the item. CFS suggests that the query be forwarded from node to node until a node is found to have the item. This node then directly sends the query response back to the issuer. Both of these approaches help avoid some of the long latencies that may occur as the query response traverses the reverse query path. CUP is advantageous regardless of whether the response is delivered directly to the issuer or through the reverse query path. However, to make this work for direct response delivery, CUP must not coalesce queries for the same item at a node into one query since each query would need to explicitly carry the return address information of the query issuer.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2118428193", "2150676586" ], "abstract": [ "A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.", "The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers.CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail." ] }
cs0202008
2952049303
Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.
Consistent hashing work by @cite_3 looks at relieving hot spots at origin web servers by caching at intermediate caches between client caches and origin servers. Requests for items originate at the leaf clients of a conceptual tree and travel up through intermediate caches toward the origin server at the root of the tree. This work uses a model slightly different from the peer-to-peer model. Their model and analysis assume requests are made only at leaf clients and that intermediate caches do not store an item until it has been requested some threshold number of times. Also, this work does not focus on maintaining cache freshness.
{ "cite_N": [ "@cite_3" ], "mid": [ "2020765652" ], "abstract": [ "We describe a family of caching protocols for distrib-uted networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and or quorum systems." ] }
cs0202008
2952049303
Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.
Cohen and Kaplan study the effect that aging through cascaded caches has on the miss rates of web client caches @cite_10. For each object, an intermediate cache refreshes its copy when the copy's age exceeds a given fraction of the lifetime duration. The intermediate cache does not push this refresh to the client; instead, the client waits until its own copy has expired, at which point it fetches the intermediate cache's copy with the remaining lifetime. For some sequences of requests at the client cache and some values of this fraction, the client cache can suffer a higher miss rate than if the intermediate cache refreshed only on expiration. Their model assumes zero communication delay. A CUP tree could be viewed as a series of cascaded caches in that each node depends on the previous node in the tree for updates to an index entry. The key difference is that in CUP, refreshes are pushed down the entire tree of interested nodes. Therefore, barring communication delays, whenever a parent cache gets a refresh so does the interested child node. In such situations, the miss rate at the child node actually improves.
{ "cite_N": [ "@cite_10" ], "mid": [ "2157170064" ], "abstract": [ "The Web is a distributed system, where data is stored and disseminated from both origin servers and caches. Origin servers provide the most up-to-date copy whereas caches store and serve copies that had been cached for a while. Origin servers do not maintain per-client state, and weak-consistency of cached copies is maintained by the origin server attaching to each copy an expiration time. Typically, the lifetime-duration of an object is fixed, and as a result, a copy fetched directly from its origin server has maximum time-to-live (TTL) whereas a copy obtained through a cache has a shorter TTL since its age (elapsed time since fetched from the origin) is deducted from its lifetime duration. Thus, a cache that is served from a cache would incur a higher miss-rate than a cache served from origin servers. Similarly, a high-level cache would receive more requests from the same client population than an origin server would have received. As Web caches are often served from other caches (e.g., proxy and reverse-proxy caches), age emerges as a performance factor. Guided by a formal model and analysis, we use different inter-request time distributions and trace-based simulations to explore the effect of age for different cache settings and configurations. We also evaluate the effectiveness of frequent pre-term refreshes by higher-level caches as a means to decrease client misses. Beyond Web content distribution, our conclusions generally apply to systems of caches deploying expiration-based consistency." ] }
cs0204018
2952290399
We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.
Formal program transformation @cite_0 separates two concerns: the development of an initial, maybe inefficient program whose correctness can easily be shown, and the stepwise derivation of a better implementation in a semantics-preserving manner. Partsch's textbook @cite_8 describes the formal approach to this kind of software development. Pettorossi and Proietti study typical transformation rules (for functional and logic programs) in @cite_17. Formal program transformation, in part, also addresses datatype transformation @cite_3, say data refinement. Here, one gives different axiomatisations or implementations of an abstract datatype, which are then related by well-founded transformation steps. This typically involves some amount of mathematical program calculation. By contrast, we deliberately focus on the more syntactical transformations that a programmer uses anyway to adapt evolving programs.
{ "cite_N": [ "@cite_0", "@cite_17", "@cite_3", "@cite_8" ], "mid": [ "2023299380", "1988290118", "1554561354", "" ], "abstract": [ "A system of rules for transforming programs is described, with the programs in the form of recursion equations. An initially very simple, lucid, and hopefully correct program is transformed into a more efficient one by altering the recursion structure. Illustrative examples of program transformations are given, and a tentative implementation is described. Alternative structures for programs are shown, and a possible initial phase for an automatic or semiautomatic program-manipulation system is indicated.", "We present an overview of the program transformation methodology, focusing our attention on the so-called “rules + strategies” approach in the case of functional and logic programs. The paper is intended to offer an introduction to the subject. The various techniques we present are illustrated via simple examples.", "The process of software development is gradually achieving more rigor. Proficient developers now construct software indirectly through the abstraction of models. Models allow a developer to focus on the essential aspects of an application and defer details. Transformations extend the power of models, as the developer can substitute refinement and optimization of models for tedious manipulation of code. We catalog object modeling transformations that we have encountered in our application work.", "" ] }
There is a large body of research addressing the related problem of database schema evolution @cite_11, as relevant, for example, in database re-engineering and reverse engineering @cite_9. The schema transformations themselves can be compared with our datatype transformations only at a superficial level because of the different formalisms involved. There exist formal frameworks for the definition of schema transformations, and various formalisms have been investigated @cite_6. An interesting aspect of database schema evolution is that schema evolution necessitates a database instance mapping @cite_18. Compare this with the evolution of the datatypes in a functional program. Here, the main concern is to update the function declarations for compliance with the new datatypes. It seems that the instance mapping problem is a special case of the program update problem.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_6", "@cite_11" ], "mid": [ "1544920330", "", "2583677609", "2215315499" ], "abstract": [ "The paper presents a DBMS-independent database reverse engineering (DBRE) methodology based on a generic process model and on transformation techniques. DBRE is proposed as a two-phase process consisting in recovering the DBMS-dependent data structures (data structure extraction) then in recovering their semantics (data structure conceptualization). The second phase, that is strongly linked with the logical design phase of current database design methodologies, can be performed by application of a selected set of standard schema restructuring techniques, or schema transformations. The paper illustrates the methodology by applying it to various DBRE processes : removing optimization structures, untransfating Relational, COBOL, CODASYL, TOTAL IMAGE and IMS database as well as file structures, and finally conceptual normalization.", "", "Several methodologies for semantic schema integration have been proposed in the literature, often using some variant of the ER model as the common data model. As part of these methodologies, various transformations have been defined that map between ER schemas which are in some sense equivalent. This paper gives a unifying formalisation of the ER schema transformation process and shows how some common schema transformations can be expressed within this single framework. Our formalism clearly identifies which transformations apply for any instance of the schema and which only for certain instances.", "Object-oriented programming is well-suited to such data-intensive application domains as CAD CAM, AI, and OIS (office information systems) with multimedia documents. At MCC we have built a prototype object-oriented database system, called ORION. It adds persistence and sharability to objects created and manipulated in applications implemented in an object-oriented programming environment. One of the important requirements of these applications is schema evolution, that is, the ability to dynamically make a wide variety of changes to the database schema. In this paper, following a brief review of the object-oriented data model that we support in ORION, we establish a framework for supporting schema evolution, define the semantics of schema evolution, and discuss its implementation." ] }
The transformational approach to program evolution is nowadays called refactoring @cite_19 @cite_4, but the idea is not new @cite_15 @cite_12. Refactoring means to improve the structure of code so that it becomes more comprehensible, maintainable, and adaptable. Interactive refactoring tools are being studied and used extensively in the object-oriented programming context @cite_16 @cite_21. Typical examples of program refactorings are described in @cite_5, e.g., the introduction of a monad in a non-monadic program. The precise instantiation of the refactoring notion for functional programming is being addressed in a project at the University of Kent by Thompson and Reinke; see @cite_7. There is also related work on type-safe meta-programming in a functional context, e.g., by Erwig @cite_13. Previous work did not specifically address datatype transformations. The refactorings for object-oriented class structures are not directly applicable because of the different structure and semantics of classes vs. algebraic datatypes.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2153887189", "1487314578", "2172168118", "1487664366", "1546190629", "2133528651", "1996710532", "2120426933", "1521332670" ], "abstract": [ "Almost every expert in Object-Oriented Development stresses the importance of iterative development. As you proceed with the iterative development, you need to add function to the existing code base. If you are really lucky that code base is structured just right to support the new function while still preserving its design integrity. Of course most of the time we are not lucky, the code does not quite fit what we want to do. You could just add the function on top of the code base. But soon this leads to applying patch upon patch making your system more complex than it needs to be. This complexity leads to bugs, and cripples your productivity.", "Refactoring is the process of improving the design of existing programs without changing their functionality. These notes cover refactoring in functional languages, using Haskell as the medium, and introducing the HaRe tool for refactoring in Haskell.", "Refactoring is an important part of the evolution of reusable software and frameworks. Its uses range from the seemingly trivial, such as renaming program elements, to the profound, such as retrofitting design patterns into an existing system. Despite its importance, lack of tool support forces programmers to refactor programs by hand, which can be tedious and error-prone. The Smalltalk Refactoring Browser is a tool that carries out many refactorings automatically, and provides an environment for improving the structure of Smalltalk programs. It makes refactoring safe and simple, and so reduces the cost of making reusable software. © 1997 John Wiley & Sons, Inc.", "This thesis defines a set of program restructuring operations (refactorings) that support the design, evolution and reuse of object-oriented application frameworks. The focus of the thesis is on automating the refactorings in a way that preserves the behavior of a program. The refactorings are defined to be behavior preserving, provided that their preconditions are met. Most of the refactorings are simple to implement and it is almost trivial to show that they are behavior preserving. However, for a few refactorings, one or more of their preconditions are in general undecidable. Fortunately, for some cases it can be determined whether these refactorings can be applied safely. Three of the most complex refactorings are defined in detail: generalizing the inheritance hierarchy, specializing the inheritance hierarchy and using aggregations to model the relationships among classes. These operations are decomposed into more primitive parts, and the power of these operations is discussed from the perspectives of automatability and usefulness in supporting design. Two design constraints needed in refactoring are class invariants and exclusive components. These constraints are needed to ensure that behavior is preserved across some refactorings. This thesis gives some conservative algorithms for determining whether a program satisfies these constraints, and describes how to use this design information to refactor a program.", "This invention relates to dispersion strengthening of met als. A coherent mass comprising an intimate blend of alloy powder and oxidant is formed prior to dispersion strengthening. 
Said coherent mass is easily formed because the alloy powder is not yet strengthened, and undergoes internal oxidation rapidly because of the intimate blend of alloy powder and oxidant.", "Porting an undocumented program without any source changes demonstrates the value of a transformational theory of maintenance. The theory is based on the reuse of knowledge.", "Most, object-oriented programs have imperfectly designed inheritance hierarchies and imperfectly factored methods, and these imperfections tend to increase with maintenance. Hence, even object-oriented programs are more expensive to maintain, harder to understand and larger than necessary. Automatic restructuring of inheritance hierarchies and refactoring of methods can improve the design of inheritance hierarchies, and the factoring of methods. This results in programs being smaller, having better code re-use and being more consistent. This paper describes Guru, a prototype tool for automatic inheritance hierarchy restructuring and method refactoring of Self programs. Results from realistic applications of the tool are presented.", "We describe the design of a rule-based language for expressing changes to Haskell programs in a systematic and reliable way. The update language essentially offers update commands for all constructs of the object language (a subset of Haskell). The update language can be translated into a core calculus consisting of a small set of basic updates and update combinators. The key construct of the core calculus is a scope update mechanism that allows (and enforces) update specifications for the definition of a symbol together with all of its uses.The type of an update program is given by the possible type changes it can cause for an object programs. We have developed a type-change inference system to automatically infer type changes for up-dates. Updates for which a type change can be successfully inferred and that satisfy an additional structural condition can be shown to preserve type correctness of object programs.In this paper we define the Haskell Update Language HULA and give a translation into the core update calculus. We illustrate HULA and its translation into the core calculus by several examples.", "Maintenance tends to degrade the structure of software, ultimately making maintenance more costly. At times, then, it is worthwhile to manipulate the structure of a system to make changes easier. However, it is shown that manual restructuring is an error-prone and expensive activity. By separating structural manipulations from other maintenance activities, the semantics of a system can be held constant by a tool, assuring that no errors are introduced by restructuring. To allow the maintenance team to focus on the aspects of restructuring and maintenance requiring human judgment, a transformation-based tool can be provided--based on a model that exploits preserving data flow-dependence and control flow-dependence--to automate the repetitive, error-prone, and computationally demanding aspects of restructuring. A set of automatable transformations is introduced; their impact on structure is described, and their usefulness is demonstrated in examples. A model to aid building meaning-preserving restructuring transformations is described, and its realization in a functioning prototype tool for restructuring Scheme programs is discussed." ] }
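For contrast with the class-based tools cited above, here is what a minimal behavior-preserving refactoring, a function rename, looks like when mechanized over syntax trees. This is an illustrative Python sketch, not one of the cited tools, and it deliberately omits the scoping and shadowing analysis a real refactoring engine must perform.

```python
import ast

class RenameFunction(ast.NodeTransformer):
    """Rename a function definition and every direct reference to it
    (naive: a real tool must respect scopes and shadowing)."""

    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_FunctionDef(self, node):
        if node.name == self.old:
            node.name = self.new
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

src = """
def calc(x):
    return x * 2

print(calc(21))
"""
tree = RenameFunction("calc", "double").visit(ast.parse(src))
print(ast.unparse(tree))   # requires Python 3.9+
```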
cs0206040
2949146353
Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.
A variety of approaches have been proposed for programming Grid applications, including object systems @cite_35 @cite_40, Web technologies @cite_11 @cite_26, problem solving environments @cite_28 @cite_34, CORBA, workflow systems, high-throughput computing systems @cite_27 @cite_8, and compiler-based systems @cite_3. We assume that while different technologies will prove attractive for different purposes, a programming model such as MPI, which allows direct control over low-level communications, will always be attractive for certain applications.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_8", "@cite_28", "@cite_3", "@cite_40", "@cite_27", "@cite_34", "@cite_11" ], "mid": [ "", "2147342997", "2140332639", "2006523031", "", "", "2162232794", "2019693854", "" ], "abstract": [ "", "Demonstrates the power of providing a common set of operating system services to wide-area applications, including mechanisms for naming, persistent storage, remote process execution, resource management, authentication and security. On a single machine, application developers can rely on the local operating system to provide these abstractions. In the wide area, however, application developers are forced to build these abstractions themselves or to do without. This ad-hoc approach often results in individual programmers implementing non-optimal solutions, wasting both programmer effort and system resources. To address these problems, we are building a system, WebOS, that provides the basic operating systems services needed to build applications that are geographically distributed, highly available, incrementally scalable and dynamically reconfigurable. Experience with a number of applications developed under WebOS indicates that it simplifies system development and improves resource utilization. In particular, we use WebOS to implement Rent-A-Server to provide dynamic replication of overloaded Web services across the wide area in response to client demands.", "The design, implementation, and performance of the Condor scheduling system, which operates in a workstation environment, are presented. The system aims to maximize the utilization of workstations with as little interference as possible between the jobs it schedules and the activities of the people who own workstations. It identifies idle workstations and schedules background jobs on them. When the owner of a workstation resumes activity at a station, Condor checkpoints the remote job running on the station and transfers it to another workstation. The system guarantees that the job will eventually complete, and that very little, if any, work will be performed more than once. A performance profile of the system is presented that is based on data accumulated from 23 stations during one month. >", "This paper presents a new system, called NetSolve, that allows users to access computational resources, such as hardware and software, distributed across the network. The development of NetSolve was motivated by the need for an easy-to-use, efficient mechanism for using computational resources remotely. Ease of use is obtained as a result of different interfaces, some of which require no programming effort from the user. Good performance is ensured by a load-balancing policy that enables NetSolve to use the computational resources available as efficiently as possible. NetSolve offers the ability to look for computational resources on a network, choose the best one availab le, solve a problem (with retry for fault-tolerance), and return the answer to the user.", "", "", "This paper discusses Nimrod, a tool for performing parametrised simulations over networks of loosely coupled workstations. Using Nimrod the user interactively generates a parametrised experiment. Nimrod then controls the distribution of jobs to machines and the collection of results. A simple graphical user interface which is built for each application allows the user to view the simulation in terms of their problem domain. 
The current version of Nimrod is implemented above OSF DCE and runs on DEC Alpha and IBM RS6000 workstations (including a 22 node SP2). Two different case studies are discussed as an illustration of the utility of the system.", "Abstract The world-wide computing infrastructure on the growing computer network technology is a leading technology to make a variety of information services accessible through the Internet for every user from the high-performance computing users through many of personal computing users. The important feature of such services is location transparency; information can be obtained irrespective of time or location in virtually shared manner. In this article, we overview Ninf, an ongoing global network-wide computing infrastructure project which allows users to access computational resources including hardware, software and scientific data distributed across a wide area network. Preliminary performance result on measuring software and network overhead is shown, and that promises the future reality of world-wide network computing.", "" ] }
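The communicator-based topology adaptation mentioned in the abstract can be sketched with the standard MPI_Comm_split operation, shown here via mpi4py. The site_id is a stand-in for the topology information MPICH-G2 exposes through communicator attributes, and the environment variable is a hypothetical way of feeding it in; this is not MPICH-G2's actual interface.

```python
# Run with, e.g.:  mpiexec -n 4 python split_demo.py
from mpi4py import MPI
import os

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Pretend each process has learned which "site" (cluster) it is on.
site_id = int(os.environ.get("SITE_ID", rank % 2))

# One communicator per site: collectives on it stay within the site.
site_comm = world.Split(color=site_id, key=rank)
local_sum = site_comm.allreduce(rank, op=MPI.SUM)
print(f"world rank {rank}: site {site_id}, "
      f"site-local rank {site_comm.Get_rank()}, site sum {local_sum}")
```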
cs0208023
2132313847
The advent of multicast and the growth and complexity of the Internet has complicated network protocol design and evaluation. Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average case analysis but does not cover boundary-point (worst or best case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. Our algorithms utilize implicit backward search using branch and bound techniques and start from given target events. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average case analyses. We hope for our method to serve as a model for applying systematic evaluation to other multicast protocols.
There is a large body of literature dealing with verification of protocols. Verification systems typically address well-defined properties, such as safety, liveness, and responsiveness @cite_41, and aim to detect violations of these properties. In general, the two main approaches for protocol verification are theorem proving and reachability analysis @cite_2. Theorem proving systems define a set of axioms and relations to prove properties, and include model-based and logic-based formalisms @cite_39 @cite_22. These systems are useful in many applications. However, they tend to abstract out some network dynamics that we study (e.g., selective packet loss). Moreover, they do not synthesize network topologies and do not address performance issues per se.
{ "cite_N": [ "@cite_41", "@cite_22", "@cite_39", "@cite_2" ], "mid": [ "2005307131", "2119895020", "2099078002", "2034717157" ], "abstract": [ "This paper addresses the problem of designing stabilizing computer communication protocols modelled by communicating finite state machines. A communication protocol is said to be stabilizing if, starting from or reaching any illegal state, the protocol will eventually reach a legal (or consistent) state, and resume its normal execution. To achieve stabilization, the protocol must be able to detect the error as soon as it occurs, and then it must recover from that error and revert back to a legal protocol state. The later issue related to recovery is tackled here, and an efficient procedure for the recovery in communications protocols is described. The recovery procedure does not require periodic checkpointing and, therefore, is less intrusive. It requires less time for rollback and fewer recovery control messages than other procedures. Only a minimal number of processes will roll back, and a minimal number of protocol messages will be retransmitted during recovery. Moreover, our procedure requires minimal stable storage to be used to record contextual information exchanged during the progress of the protocol. Finally, our procedure is compared with an existing recovery procedure, and an illustrative example is provided.", "By providing a formal semantics for Z, this book justifies the claim that Z is a precise specification language, and provides a standard framework for understanding Z specifications. It makes a detailed theoretical comparison between schemas, the Z construct for breaking specifications into modules, and the analogous facilities in other languages such as CLEAR and ASL. The final chapter contains a number of studies in Z style, showing that Z can be used for a wide variety of specification tasks.", "Contains a precise and complete description of the computational logic develo by the authors; will serve also as a reference guide to the associated mechanical theorem proving system. Annotation copyright Book News, Inc. Portland, Or.", "Hardware and software systems will inevitably grow in scale and functionality. Because of this increase in complexity, the likelihood of subtle errors is much greater. Moreover, some of these errors may cause catastrophic loss of money, time, or even human life. A major goal of software engineering is to enable developers to construct systems that operate reliably despite this complexity. One way of achieving this goal is by using formal methods, which are mathematically based languages, techniques, and tools for specifying and verifying such systems. Use of formal methods does not a priori guarantee correctness. However, they can greatly increase our understanding of a system by revealing inconsistencies, ambiguities, and incompleteness that might otherwise go undetected. The first part of this report assesses the state of the art in specification and verification. For verification, we highlight advances in model checking and theorem proving. In the three sections on specification, model checking, and theorem proving, we explain what we mean by the general technique and briefly describe some successful case studies and well-known tools. The second part of this report outlines future directions in fundamental concepts, new methods and tools, integration of methods, and education and technology transfer. We close with summary remarks and pointers to resources for more information." ] }
Reachability analysis algorithms @cite_21, on the other hand, try to inspect reachable protocol states, and suffer from the 'state space explosion' problem. To circumvent this problem, state reduction techniques can be used @cite_25. These algorithms, however, do not synthesize network topologies. Reduced reachability analysis has been used in the verification of cache coherence protocols @cite_0, using a global FSM model. We adopt a similar FSM model and extend it for our approach in this study. However, our approach differs in that we address end-to-end protocols, which encompass rich timing, delay, and loss semantics, and we address performance issues (such as overhead or response delays).
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_25" ], "mid": [ "2033263591", "2023949272", "1540501475" ], "abstract": [ "In this article we present a comprehensive survey of various approaches for the verification of cache coherence protocols based on state enumeration, (symbolic model checking , and symbolic state models . Since these techniques search the state space of the protocol exhaustively, the amount of memory required to manipulate that state information and the verification time grow very fast with the number of processors and the complexity of the protocol mechanisms. To be successful for systems of arbitrary complexity, a verification technique must solve this so-called state space explosion problem. The emphasis of our discussion is onthe underlying theory in each method of handling the state space exposion problem, and formulationg and checking the safety properties (e.g., data consistency) and the liveness properties (absence of deadlock and livelock). We compare the efficiency and discuss the limitations of each technique in terms of memory and computation time. Also, we discuss issues of generality, applicability, automaticity, and amenity for existing tools in each class of methods. No method is truly superior because each method has its own strengths and weaknesses. Finally, refinements that can further reduce the verification time and or the memory requirement are also discussed.", "Reachability analysis has proved to be one of the most effective methods in verifying correctness of communication protocols based on the state transition model. Consequently, many protocol verification tools have been built based on the method of reachability analysis. Nevertheless, it is also well known that state space explosion is the most severe limitation to the applicability of this method. Although researchers in the field have proposed various strategies to relieve this intricate problem when building the tools, a survey and evaluation of these strategies has not been done in the literature. In searching for an appropriate approach to tackling such a problem for a grammar-based validation tool, we have collected and evaluated these relief strategies, and have decided to develop our own from yet another but more systematic approach. The results of our research are now reported in this paper. Essentially, the paper is to serve two purposes: first, to give a survey and evaluation of existing relief strategies; second, to propose a new strategy, called PROVAT (PROtocol VAlidation Testing), which is inspired by the heuristic search techniques in Artificial Intelligence. Preliminary results of incorporating the PROVAT strategy into our validation tool are reviewed in the paper. These results show the empirical evidence of the effectiveness of the PROVAT strategy.", "In this paper, we present a verification method for concurrent finite-state systems that attempts to avoid the part of the combinatorial explosion due to the modeling of concurrency by interleavings. The behavior of a system is described in terms of partial orders (more precisely in terms of Mazurkiewicz's traces) rather than in terms of interleavings. We introduce the notion of “trace automation” which generates only one linearization per partial order. Then we show how to use trace automata to prove program correctness." ] }
There is a sizable number of publications dealing with conformance testing @cite_24 @cite_27 @cite_26 @cite_5. However, conformance testing verifies that an implementation (as a black box) adheres to a given specification of the protocol by constructing input/output sequences. Conformance testing is useful during the implementation testing phase --which we do not address in this paper-- but it addresses neither performance issues nor topology synthesis for design testing. By contrast, our method synthesizes test scenarios for protocol design, according to given evaluation criteria.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_27", "@cite_26" ], "mid": [ "2073074602", "1965028135", "2158348156", "2029755436" ], "abstract": [ "Abstract We present simple randomized algorithms for the fault detection problem : Given a specification in the form of a deterministic finite state machine A and an implementation machine B , determine whether B is equal to A . If A has n states and p inputs, then in randomized polynomial time we can construct with high probability a checking sequence of length O ( pn 4 log n ), i.e., a sequence that detects all faulty machines with at most n states. Better bounds can be obtained in certain cases. The techniques generalize to partially specified finite state machines.", "Abstract A procedure presented here generates test sequences for checking the conformity of an implementation to the control portion of a protocol specification, which is modeled as a deterministic finite-state machine (FSM). A test sequence generated by the procedure given here tours all state transitions and uses a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an input output behavior that is not exhibited by any other state. An algorithm is presented for generating a minimum-length UIO sequence, should it exist, for a given state. UIO sequences may not exist for some states.", "Abstract Now that many international standards for Open Systems Interconnection (OSI) services and protocols have reached at least the Draft International Standard stage, there is much international activity in standardizing OSI conformance test suites. This new activity is based on the emerging OSI conformance testing methodology and framework standards being progressed by ISO and CCITT. This paper presents the major aspects of this methodology and framework.", "A novel procedure presented here generates test sequences for checking the conformity of protocol implementations to their specifications. The test sequences generated by this procedure only detect the presence of many faults, but they do not locate the faults. It can always detect the problem in an implementation with a single fault. A protocol entity is specified as a finite state machine (FSM). It typically has two interfaces: an interface with the user and with the lower-layer protocol. The inputs from both interfaces are merged into a single set I and the outputs from both interfaces are merged into a single set O. The implementation is assumed to be a black box. The key idea in this procedure is to tour all states and state transitions and to check a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an I O behavior that is not exhibited by any other state." ] }
Automatic test generation techniques have been used in several fields. VLSI chip testing @cite_16 uses test vector generation to detect target faults. Test vectors may be generated based on circuit and fault models, using the fault-oriented technique, which utilizes implication techniques. These techniques were adopted in @cite_29 to develop fault-oriented test generation (FOTG) for multicast routing. In @cite_29, FOTG was used to study the correctness of a multicast routing protocol on a LAN. We extend FOTG to study the performance of end-to-end multicast mechanisms. We introduce the concept of a virtual LAN to represent the underlying network, integrate timing and delay semantics into our model, and use performance criteria to drive our synthesis algorithm.
{ "cite_N": [ "@cite_29", "@cite_16" ], "mid": [ "1962021926", "1554885925" ], "abstract": [ "We present a new algorithm for automatic test generation for multicast routing. Our algorithm processes a finite state machine (FSM) model of the protocol and uses a mix of forward and backward search techniques to generate the tests. The output tests include a set of topologies, protocol events and network failures, that lead to violation of protocol correctness and behavioral requirements. We target protocol robustness in specific, and do not attempt to verify other properties in this paper. We apply our method to a multicast routing protocol; PIM-DM, and investigate its behavior in the presence of selective packet loss on LANs and router crashes. Our study unveils several robustness violations in PIM-DM, for which we suggest fixes with the aid of the presented algorithm.", "For many years, Breuer-Friedman's Diagnosis and Reliable Design ofDigital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greativ expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-ofthe-art treatment of both testing and testable design." ] }
In @cite_36, a simulation-based stress testing framework based on heuristics was proposed. However, that method does not provide automatic topology generation, nor does it address performance issues. The VINT @cite_17 tools provide a framework for Internet protocol simulation. Based on the network simulator (NS) @cite_31 and the network animator (NAM) @cite_23, VINT provides a library of protocols and a set of validation test suites. However, it does not provide a generic tool for generating these tests automatically. The work in this paper is complementary to such studies, and may be integrated with network simulation tools, similar to our work in .
{ "cite_N": [ "@cite_36", "@cite_31", "@cite_23", "@cite_17" ], "mid": [ "1544819167", "2031849950", "1835764870", "1494182064" ], "abstract": [ "We propose a method for using simulation to analyze the robustness of multiparty (multicast-based) protocols in a systematic fashion. We call our method Systematic Testing of Robustness by Examination of Selected Scenarios (STRESS). STRESS aims to cut the time and effort needed to explore pathological cases of a protocol during its design. This paper has two goals: (1) to describe the method, and (2) to serve as a case study of robustness analysis of multicast routing protocols. We aim to offer design tools similar to those used in CAD and VLSI design, and demonstrate how effective systematic simulation can be in studying protocol robustness.", "Network researchers must test Internet protocols under varied conditions to determine whether they are robust and reliable. The paper discusses the Virtual Inter Network Testbed (VINT) project which has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.", "Protocol design requires understanding state distributed across many nodes, complex message exchanges, and with competing traAEc. Traditional analysis tools (such as packet traces) too often hide protocol dynamics in a mass of extraneous detail. This paper presents nam , a network animator that provides packet-level animation and protocol-speci c graphs to aid the design and debugging of new network protocols. Taking data from network simulators (such as ns) or live networks, nam was one of the rst tools to provide general purpose, packet-level, network animation. Nam now integrates traditional time-event plots of protocol actions and scenario editing capabilities. We describe how nam visualizes protocol and network dynamics.", "" ] }
cs0208029
2953092023
Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.
The multiparadigm language Leda was developed for educational purposes @cite_117 . It is sequential, supports functional and object-oriented programming, and has basic support for backtracking and a simple form of logic programming that is a subset of Prolog.
{ "cite_N": [ "@cite_117" ], "mid": [ "1600021945" ], "abstract": [ "Offering an alternative approach to multiparadigm programming concepts, this work presents the four major language paradigms - imperative, object-oriented, functional and logical - through a new, common language called Leda. It: introduces the important emerging topic of multiparadigm programming - a concept that could be characterized as \"the best of\" programming languages; provides a coherent basis for comparing multiple paradigms in a common framework through a single language; and gives both a technical overview and summaries on important topics in programming-language development." ] }
cs0210015
2130323825
We consider the prospect of a processor that can perform interval arithmetic at the same speed as conventional floating-point arithmetic. This makes it possible for all arithmetic to be performed with the superior security of interval methods without any penalty in speed. In such a situation the IEEE floating-point standard needs to be compared with a version of floating-point arithmetic that is ideal for the purpose of interval arithmetic. Such a comparison requires a succinct and complete exposition of interval arithmetic according to its recent developments. We present such an exposition in this paper. We conclude that the directed roundings toward the infinities and the definition of division by the signed zeros are valuable features of the standard. Because the operations of interval arithmetic are always defined, exceptions do not arise. As a result neither NaNs nor exceptions are needed. Of the status flags, only the inexact flag may be useful. Denormalized numbers seem to have no use for interval arithmetic; in the use of interval constraints, they are a handicap.
For most of the time since the beginning of interval arithmetic, two systems have coexisted. One was the official one, where intervals were bounded and division by an interval containing zero was undefined. Recognizing the impracticality of this approach, there was also a definition of "extended" interval arithmetic @cite_5 where these limitations were lifted. Representative of this state of affairs are the monographs by Hansen @cite_4 and Kearfott @cite_0. However, here the specification of interval division is quite far from an efficient implementation that takes advantage of the IEEE floating-point standard. The specification is indirect, via multiplication by the interval inverse. There is no consideration of the possibility of undefined operations: presumably one is to perform a test before each operation.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "2129480103", "53281316", "2067648572" ], "abstract": [ "List of Figures. List of Tables. Preface. 1. Preliminaries. 2. Software Environments. 3. On Preconditioning. 4. Verified Solution of Nonlinear Systems. 5. Optimization. 6. Non-Differentiable Problems. 7. Use of Intermediate Quantities in the Expression Values. References. Index.", "", "Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic." ] }
cs0210024
1661438368
We introduce a new class of scheduling problems in which the optimization is performed by the worker (single "machine") who performs the tasks. A typical worker's objective is to minimize the amount of work he does (he is "lazy"), or more generally, to schedule as inefficiently (in some sense) as possible. The worker is subject to the constraint that he must be busy when there is work that he can do; we make this notion precise both in the preemptive and nonpreemptive settings. The resulting class of "perverse" scheduling problems, which we denote "Lazy Bureaucrat Problems," gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules.
Recently Hepner and Stein @cite_5 published a pseudo-polynomial-time algorithm for minimizing the makespan subject to preemption constraint II, thus resolving an open problem from an earlier version of this paper @cite_3 . They also extend the LBP to the parallel setting, in which there are multiple bureaucrats.
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "1565331571", "2568731922" ], "abstract": [ "We study the problem of minimizing makespan for the Lazy Bureaucrat Scheduling Problem. We give a pseudopolynomial time algorithm for a preemptive scheduling problem, resolving an open problem by We also extend the definition of Lazy Bureaucrat scheduling to the multiple-bureaucrat (parallel) setting, and provide pseudopolynomial-time algorithms for problems in that model.", "We introduce a new class of scheduling problems in which the optimization is performed by the worker (single \"machine\") who performs the tasks. The worker's objective may be to minimize the amount of work he does (he is \"lazy\"). He is subject to a constraint that he must be busy when there is work that he can do; we make this notion precise, particularly when preemption is allowed. The resulting class of \"perverse\" scheduling problems, which we term \"Lazy Bureaucrat Problems,\" gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules." ] }
quant-ph0210020
2949678910
Given a Boolean function f, we study two natural generalizations of the certificate complexity C(f): the randomized certificate complexity RC(f) and the quantum certificate complexity QC(f). Using Ambainis' adversary method, we exactly characterize QC(f) as the square root of RC(f). We then use this result to prove the new relation R0(f) = O(Q2(f)^2 Q0(f) log n) for total f, where R0, Q2, and Q0 are zero-error randomized, bounded-error quantum, and zero-error quantum query complexities respectively. Finally we give asymptotic gaps between the measures, including a total f for which C(f) is superquadratic in QC(f), and a symmetric partial f for which QC(f) = O(1) yet Q2(f) = Omega(n/log n).
@cite_2 studied a query complexity measure they called @math , for Merlin-Arthur. In our notation, @math equals the maximum of @math over all @math with @math . They observed that @math , where @math is the number of queries needed given arbitrarily many rounds of interaction with a prover. They also used error-correcting codes to construct a total @math for which @math but @math . This has similarities to our construction, in Section , of a symmetric partial @math for which @math but @math . Aside from that and from Proposition , their results do not overlap with ours.
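To ground the measures being compared, here is a small brute-force sketch of the deterministic certificate complexity C(f) from a truth table; this is textbook material rather than anything specific to @cite_2 or to the randomized/quantum variants, and all names in it are ours.

from itertools import combinations, product

def certificate_complexity(f, n):
    """C(f): max over inputs x of the size of the smallest variable set S
    such that every input agreeing with x on S has the same f-value as x."""
    def cert_size(x):
        for k in range(n + 1):
            for S in combinations(range(n), k):
                if all(f(y) == f(x)
                       for y in product([0, 1], repeat=n)
                       if all(y[i] == x[i] for i in S)):
                    return k
    return max(cert_size(x) for x in product([0, 1], repeat=n))

# OR on 3 bits: any 1-input is certified by a single set bit, but the
# all-zero input needs all 3 positions, so C(OR) = 3.
print(certificate_complexity(lambda x: int(any(x)), 3))   # prints 3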
{ "cite_N": [ "@cite_2" ], "mid": [ "2085180799" ], "abstract": [ "It is well known that probabilistic boolean decision trees cannot be much more powerful than deterministic ones (N. Nisan, SIAM J. Comput. 20, No. 6 (1991), 999-1007). Motivated by a question if randomization can significantly speed up a nondeterministic computation via a boolean decision tree, we address structural properties of Arthur-Merlin games in this model and prove some lower bounds. We consider two cases of interest, the first when the length of communication between the players is limited and the second, if it is not. While in the first case we can carry over the relations between the corresponding Turing complexity classes, in the second case we observe in contrast with Turing complexity that a one-round Merlin-Arthur protocol is as powerful as a general interactive proof system and, in particular, can simulate a one-round Arthur-Merlin protocol. Moreover, we show that sometimes a Merlin-Arthur protocol can be more efficient than an Arthur-Merlin protocol and than a Merlin-Arthur protocol with limited communication. This is the case for a boolean function whose set of zeroes is a code with high minimum distance and a natural uniformity condition. Such functions provide an example when the Merlin-Arthur complexity is 1 with one-sided error ε ∈ (2/3, 1), but at the same time the nondeterministic decision tree complexity is Ω(n). The latter should be contrasted with another fact we prove. Namely, if a function has Merlin-Arthur complexity 1 with one-sided error probability ε ∈ (0, 2/3), then its nondeterministic complexity is bounded by a constant. Other results of the paper include connections with the block sensitivity and related combinatorial properties of a boolean function." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
We now review loop checking in more detail. To the best of our knowledge, among all existing loop checking mechanisms only OS-check @cite_18 , EVA-check @cite_13 and VAF-check @cite_14 are suitable for logic programs with function symbols. They rely on a structural characteristic of infinite SLD-derivations, namely, the growth of the size of some generated subgoals. This is what the following theorem states. Here, @math is a given function that maps an atom to its size, which is defined in terms of the number of symbols appearing in the atom. As this theorem does not provide any sufficient condition to detect infinite SLD-derivations, the three loop checking mechanisms mentioned above may detect finite derivations as infinite. However, these mechanisms are complete: they detect all infinite loops when the leftmost selection rule is used.
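To make the size function concrete, a possible sketch is given below: terms are encoded as nested tuples (functor, arg1, ..., argk), with variables and constants as plain atoms, and the size of an atom is its symbol count, as in the theorem's setting. The encoding is our own assumption.

def term_size(t):
    """Number of symbols in a term: a variable or a constant counts 1;
    a compound term f(t1,...,tk) counts 1 for f plus the sizes of its args."""
    if isinstance(t, tuple):            # compound term: ('f', arg1, ..., argk)
        return 1 + sum(term_size(a) for a in t[1:])
    return 1                            # variable or constant

# The atom p(f(X), a) has symbols p, f, X, a, hence size 4.
print(term_size(('p', ('f', 'X'), 'a')))   # prints 4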
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_13" ], "mid": [ "89799265", "1981314990", "1975546131" ], "abstract": [ "", "Two complete loop checking mechanisms have been presented in the literature for logic programs with functions: OS-check and EVA-check. OS-check is computationally efficient but quite unreliable in that it often misidentifies infinite loops, whereas EVA-check is reliable for a majority of cases but quite expensive. In this paper, we develop a series of new complete loop checking mechanisms, called VAF-checks. The key technique we introduce is the notion of expanded variants, which captures a key structural characteristic of infinite loops. We show that our approach is superior to both OS-check and EVA-check in that it is as efficient as OS-check and as reliable as EVA-check.", "The Equality check and the Subsumption check are weakly sound, but are not complete even for function-free logic programs. Although the OverSize (OS) check is complete for positive logic programs, it is too general in the sense that it prunes SLD-derivations merely based on the depth-bound of repeated predicate symbols and the size of atoms, regardless of the inner structure of the atoms, so it may make wrong conclusions even for some simple programs. In this paper, we explore complete loop checking mechanisms for positive logic programs. We develop an extended Variant of Atoms (VA) check that has the following features: (1) it generalizes the concept of “variant” from “the same up to variable renaming” to “the same up to variable renaming except possibly with some arguments whose size recursively increases”, (2) it makes use of the depth-bound of repeated variants of atoms instead of depth-bound of repeated predicate symbols, (3) it combines the Equality Subsumption check with the VA check, (4) it is complete w. r. t. the leftmost selection rule for positive logic programs, and (5) it is more sound than both the OS check and all the existing versions of the VA check." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
OS-check (for OverSize loop check) was first introduced by Sahlin @cite_18 @cite_11 and was then formalized by Bol @cite_19 . It is based on a function @math that can have one of the three following definitions: for any atoms @math and @math , either @math ; or @math (resp. @math ) is the count of symbols appearing in @math (resp. @math ); or @math if for each @math , the count of symbols of the @math -th argument of @math is smaller than or equal to that of the @math -th argument of @math . OS-check says that an SLD-derivation may be infinite if it generates an atomic subgoal @math that is oversized, i.e., that has ancestor subgoals which have the same predicate symbol as @math and whose size is smaller than or equal to that of @math .
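A minimal sketch of the OS-check core test under the third (argument-wise) definition above, reusing the tuple encoding of terms from the previous sketch; real implementations also manage the ancestor list and a depth bound, which we omit here.

def term_size(t):
    return 1 + sum(term_size(a) for a in t[1:]) if isinstance(t, tuple) else 1

def oversized(subgoal, ancestors):
    """Flags `subgoal` as possibly looping if some ancestor subgoal has the
    same predicate symbol and each of its arguments is no larger (in symbol
    count) than the corresponding argument of `subgoal`."""
    pred, args = subgoal[0], subgoal[1:]
    return any(
        anc[0] == pred and
        all(term_size(a) <= term_size(b) for a, b in zip(anc[1:], args))
        for anc in ancestors)

# p(s(s(0))) has ancestor p(s(0)) with a smaller argument: flagged as oversized.
print(oversized(('p', ('s', ('s', '0'))), [('p', ('s', '0'))]))   # True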
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_11" ], "mid": [ "2135124683", "89799265", "1985933047" ], "abstract": [ "In the framework of Lloyd and Shepherdson [16], partial deduction involves the creation of SLDNF-trees for a given program and some goals up to certain halting points. This paper identifies the relation between halting criteria for partial deduction and loop checking (as formalized in [1]). For simplicity, we consider only positive programs and SLD-resolution here. It appears that loop checks for partial deduction must be complete, whereas traditionally, the soundness of a loop check is more important. However, it is also shown that sound loop checks can contribute to improve partial deduction. Finally, a class of complete loop checks suitable for partial deduction is identified.", "", "A partial evaluator for Prolog takes a program and a query and return a program specialized for all instances of that query. The intention is that the generated program will execute more efficiently than the original one for those instances." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
EVA-check (for Extended Variant Atoms loop check) was introduced by Shen @cite_13 . It is based on the notion of generalized variants . EVA-check says that an SLD-derivation may be infinite if it generates an atomic subgoal @math that is a generalized variant of some of its ancestor @math , i.e., @math is a variant of @math except for some arguments whose size increases from @math to @math via a set of recursive clauses. Here the size function that is used applies to predicate arguments, i.e. to terms, and it is fixed: it is defined as the count of symbols that appear in the terms. EVA-check is more reliable than OS-check because it is less likely to mis-identify infinite loops @cite_13 . This is mainly due to the fact that, unlike OS-check, EVA-check refers to the informative internal structure of subgoals.
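EVA-check extends the plain notion of variant atoms ("the same up to variable renaming"). The base test that it generalizes can be sketched as below, with our own convention that variables are strings starting with an uppercase letter; the "growing arguments" relaxation of EVA itself is not modeled here.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()     # convention: 'X', 'Y'

def variants(a, b, fwd=None, bwd=None):
    """True iff terms a and b are identical up to a bijective renaming
    of variables -- the notion that EVA-check generalizes."""
    fwd = {} if fwd is None else fwd                  # renaming a -> b
    bwd = {} if bwd is None else bwd                  # and its inverse
    if is_var(a) and is_var(b):
        return fwd.setdefault(a, b) == b and bwd.setdefault(b, a) == a
    if isinstance(a, tuple) and isinstance(b, tuple):
        return (len(a) == len(b) and a[0] == b[0] and
                all(variants(x, y, fwd, bwd) for x, y in zip(a[1:], b[1:])))
    return a == b                                     # identical constants

print(variants(('p', 'X', ('f', 'X')), ('p', 'Y', ('f', 'Y'))))  # True
print(variants(('p', 'X', 'X'), ('p', 'Y', 'Z')))                # False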
{ "cite_N": [ "@cite_13" ], "mid": [ "1975546131" ], "abstract": [ "The Equality check and the Subsumption check are weakly sound, but are not complete even for function-free logic programs. Although the OverSize (OS) check is complete for positive logic programs, it is too general in the sense that it prunes SLD-derivations merely based on the depth-bound of repeated predicate symbols and the size of atoms, regardless of the inner structure of the atoms, so it may make wrong conclusions even for some simple programs. In this paper, we explore complete loop checking mechanisms for positive logic programs. We develop an extended Variant of Atoms (VA) check that has the following features: (1) it generalizes the concept of “variant” from “the same up to variable renaming” to “the same up to variable renaming except possibly with some arguments whose size recursively increases”, (2) it makes use of the depth-bound of repeated variants of atoms instead of depth-bound of repeated predicate symbols, (3) it combines the Equality Subsumption check with the VA check, (4) it is complete w. r. t. the leftmost selection rule for positive logic programs, and (5) it is more sound than both the OS check and all the existing versions of the VA check." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
VAF-check (for Variant Atoms loop check for logic programs with Functions) was proposed by Shen @cite_14 . It is based on the notion of expanded variants . VAF-check says that an SLD-derivation may be infinite if it generates an atomic subgoal @math that is an expanded variant of some of its ancestor @math , i.e., @math is a variant of @math except for some arguments @math such that either @math grows from @math to @math into a function containing @math , or @math grows from @math to @math into a function containing @math . VAF-check is as reliable as and more efficient than EVA-check @cite_14 .
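The growth condition at the heart of VAF-check, an argument that grows "into a function containing" its earlier value, can be sketched as a subterm test on the same tuple encoding of terms; this captures only the core comparison, not Shen's full definition of expanded variants.

def contains(big, small):
    """True iff `small` occurs as a subterm of `big`."""
    return big == small or (isinstance(big, tuple) and
                            any(contains(a, small) for a in big[1:]))

def grows_into(old, new):
    """Sketch of VAF-style argument growth: the new argument is a compound
    term strictly containing the old argument as a subterm."""
    return isinstance(new, tuple) and new != old and contains(new, old)

print(grows_into('X', ('s', 'X')))                  # X grows into s(X): True
print(grows_into(('s', '0'), ('s', ('s', '0'))))    # s(0) into s(s(0)): True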
{ "cite_N": [ "@cite_14" ], "mid": [ "1981314990" ], "abstract": [ "Two complete loop checking mechanisms have been presented in the literature for logic programs with functions: OS-check and EVA-check. OS-check is computationally efficient but quite unreliable in that it often misidentifies infinite loops, whereas EVA-check is reliable for a majority of cases but quite expensive. In this paper, we develop a series of new complete loop checking mechanisms, called VAF-checks. The key technique we introduce is the notion of expanded variants, which captures a key structural characteristic of infinite loops. We show that our approach is superior to both OS-check and EVA-check in that it is as efficient as OS-check and as reliable as EVA-check." ] }
cs0212045
1877185759
Community identification algorithms have been used to enhance the quality of the services perceived by their users. Although algorithms for community identification have widespread use in the Web, their application to portals or specific subsets of the Web has not been much studied. In this paper, we propose a technique for local community identification that takes into account user access behavior derived from access logs of servers in the Web. The technique takes a departure from the existing community algorithms since it changes the focus of interest, moving from authors to users. Our approach does not use relations imposed by authors (e.g. hyperlinks in the case of Web pages). It uses information derived from user accesses to a service in order to infer relationships. The communities identified are of great interest to content providers since they can be used to improve quality of their services. We also propose an evaluation methodology for analyzing the results obtained by the algorithm. We present two case studies based on actual data from two services: an online bookstore and an online radio. The case of the online radio is particularly relevant, because it emphasizes the contribution of the proposed algorithm to find communities in an environment (i.e., a streaming media service) without links that represent the relations imposed by authors (e.g. hyperlinks in the case of Web pages).
A considerable amount of research has been developed on community identification over the Web. Most of the approaches focus on analyzing text content, considering vector-space models for the objects, as usually done in Information Retrieval @cite_7 , the hyperlink structure connecting the pages @cite_5 @cite_12 @cite_4 , the markup tags associated with the hyperlinks, or the combination of the previously cited sources of information @cite_6 @cite_3 . Therefore, they are restricted to objects that contain implicit information provided by the authors. Our work, on the other hand, is based solely on user access behavior.
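A minimal sketch of the kind of access-based relationship inference described above: given a log of (user, object) accesses, two objects are linked with a weight equal to the number of distinct users who accessed both. The log format and the edge threshold are illustrative assumptions, not the paper's exact construction.

from collections import defaultdict
from itertools import combinations

def co_access_graph(log, min_common_users=2):
    """log: iterable of (user, object) pairs from a server access log.
    Returns {(obj_a, obj_b): w} where w counts the distinct users who
    accessed both objects, keeping only edges at or above the threshold."""
    users_of = defaultdict(set)                      # object -> set of users
    for user, obj in log:
        users_of[obj].add(user)
    return {(a, b): len(users_of[a] & users_of[b])
            for a, b in combinations(sorted(users_of), 2)
            if len(users_of[a] & users_of[b]) >= min_common_users}

log = [("u1", "song1"), ("u1", "song2"), ("u2", "song1"),
       ("u2", "song2"), ("u3", "song3")]
print(co_access_graph(log))                          # {('song1', 'song2'): 2}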
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_6", "@cite_3", "@cite_5", "@cite_12" ], "mid": [ "", "197486081", "2005124845", "2079672501", "2020423193", "" ], "abstract": [ "", "Disclosed is a novel and improved magnetic separator or filter, as well as an apparatus for making such a magnetic separator. Each filter cell comprises a base plate and a plurality of fibrous filter elements made of magnetizable material and secured to the base plate. The filter elements extend in a mutually parallel spaced relation and substantially perpendicularly to the direction of flow of the fluid and to the plane of the magnetic field.", "Topic distillation is the analysis of hyperlink graph structure to identify mutually reinforcing authorities (popular pages) and hubs (comprehensive lists of links to authorities). Topic distillation is becoming common in Web search engines, but the best-known algorithms model the Web graph at a coarse grain, with whole pages as single nodes. Such models may lose vital details in the markup tag structure of the pages, and thus lead to a tightly linked irrelevant subgraph winning over a relatively sparse relevant subgraph, a phenomenon called topic drift or contamination . The problem gets especially severe in the face of increasingly complex pages with navigation panels and advertisement links. We present an enhanced topic distillation algorithm which analyzes text, the markup tag trees that constitute HTML pages, and hyperlinks between pages. It thereby identifies subtrees which have high text- and hyperlink-based coherence w.r.t. the query. These subtrees get preferential treatment in the mutual reinforcement process. Using over 50 queries, 28 from earlier topic distillation work, we analyzed over 700,000 pages and obtained quantitative and anecdotal evidence that the new algorithm reduces topic drift.", "This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.", "The World Wide Web grows through a decentralized, almost anarchic process, and this has resulted in a large hyperlinked corpus without the kind of logical organization that can be built into more traditionally-created hypermedia. To extract meaningful structure under such circumstances, we develop a notion of hyperlinked communities on the www through an analysis of the link topology. By invoking a simple, mathematically clean method for defining and exposing the structure of these communities, we are able to derive a number of themes: The communities can be viewed as containing a core of central, “authoritative” pages linked together by hub pages, and they exhibit a natural type of hierarchical topic generalization that can be inferred directly from the pattern of linkage. Our investigation shows that although the process by which users of the Web create pages and links is very difficult to understand at a “local” level, it results in a much greater degree of orderly high-level structure than has typically been assumed.", "" ] }
cs0212045
1877185759
Community identification algorithms have been used to enhance the quality of the services perceived by their users. Although algorithms for community identification have widespread use in the Web, their application to portals or specific subsets of the Web has not been much studied. In this paper, we propose a technique for local community identification that takes into account user access behavior derived from access logs of servers in the Web. The technique takes a departure from the existing community algorithms since it changes the focus of interest, moving from authors to users. Our approach does not use relations imposed by authors (e.g. hyperlinks in the case of Web pages). It uses information derived from user accesses to a service in order to infer relationships. The communities identified are of great interest to content providers since they can be used to improve quality of their services. We also propose an evaluation methodology for analyzing the results obtained by the algorithm. We present two case studies based on actual data from two services: an online bookstore and an online radio. The case of the online radio is particularly relevant, because it emphasizes the contribution of the proposed algorithm to find communities in an environment (i.e., a streaming media service) without links that represent the relations imposed by authors (e.g. hyperlinks in the case of Web pages).
Besides, we are considering community identification applied to a local context instead of the whole Web. Our approach aims to adapt the graph-based community identification algorithm described in @cite_5 . Some modifications to @cite_5 that take into account user information have already been proposed in @cite_10 . However, this work was not focused on the community identification capabilities of @cite_5 and also considered a different representation of user patterns.
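Since the algorithm of @cite_5 being adapted is Kleinberg's hub/authority computation, a compact power-iteration sketch may be useful; the adjacency encoding and the fixed iteration count are our choices, and the user-based adaptation itself is not shown.

def hits(graph, iters=50):
    """graph: dict mapping a node to the list of nodes it links to.
    Returns (hub, auth) score dicts computed by mutual reinforcement:
    auth(v) sums hub scores of nodes pointing to v, hub(u) sums auth
    scores of nodes u points to, with L2 normalization each round."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    hub = {n: 1.0 for n in nodes}
    auth = dict(hub)
    for _ in range(iters):
        auth = {n: sum(hub[u] for u in graph if n in graph[u]) for n in nodes}
        hub = {n: sum(auth[v] for v in graph.get(n, [])) for n in nodes}
        for d in (auth, hub):
            norm = sum(x * x for x in d.values()) ** 0.5 or 1.0
            for n in d:
                d[n] /= norm
    return hub, auth

g = {"h1": ["a1", "a2"], "h2": ["a1"]}
hub, auth = hits(g)
print(max(auth, key=auth.get))    # 'a1' -- it is pointed to by both hubs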
{ "cite_N": [ "@cite_5", "@cite_10" ], "mid": [ "2020423193", "1968509530" ], "abstract": [ "The World Wide Web grows through a decentralized, almost anarchic process, and this has resulted in a large hyperlinked corpus without the kind of logical organization that can be built into more traditionally-created hypermedia. To extract meaningful structure under such circumstances, we develop a notion of hyperlinked communities on the www through an analysis of the link topology. By invoking a simple, mathematically clean method for defining and exposing the structure of these communities, we are able to derive a number of themes: The communities can be viewed as containing a core of central, “authoritative” pages linked together by hub pages, and they exhibit a natural type of hierarchical topic generalization that can be inferred directly from the pattern of linkage. Our investigation shows that although the process by which users of the Web create pages and links is very difficult to understand at a “local” level, it results in a much greater degree of orderly high-level structure than has typically been assumed.", "Kleinberg’s HITS algorithm, a method of link analysis, uses the link structure of a network of webpages to assign authority and hub weights to each page. These weights are used to rank sources on a particular topic. We have found that certain tree-like web structures can lead the HITS algorithm to return either arbitrary or non-intuitive results. We give a characterization of these web structures. We present two modifications to the adjacency matrix input to the HITS algorithm. Exponentiated Input, our first modification, includes information not only on direct links but also on longer paths between pages. It resolves both limitations mentioned above. Usage Weighted Input, our second modification, weights links according to how often they were followed by users in a given time period; it incorporates user feedback without requiring direct user querying." ] }
quant-ph0212071
1522728444
We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as -log_2 m(s) without using the concept of program-size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countable measurement outcomes performed upon a finite dimensional quantum system. We show that, up to a multiplicative constant, 2^ -K(s) is the upper bound for the probability of each measurement outcome s in such a POVM measurement. In what follows, the upper bound 2^ -K(s) is shown to be optimal in a certain sense.
Our aim is to generalize algorithmic information theory in order to understand the algorithmic feature of quantum mechanics. There are related works whose purpose is mainly to define the information content of an individual pure quantum state, i.e., to define the complexity of the quantum state @cite_4 @cite_3 @cite_2 , while we will not make such an attempt in this paper.
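For reference, the two equivalent classical definitions recalled in the abstract above can be written in standard notation (here U is the universal self-delimiting Turing machine, and the last identity is the usual coding-theorem form of the equivalence):

K(s) = \min \{\, |p| \;:\; U(p) = s \,\}, \qquad
m(s) = \sum_{p \,:\, U(p) = s} 2^{-|p|}, \qquad
K(s) = -\log_2 m(s) + O(1).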
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2106534004", "2762038352", "1999580286" ], "abstract": [ "We develop a theory of the algorithmic information in bits contained in an individual pure quantum state. This extends classical Kolmogorov complexity to the quantum domain retaining classical descriptions. Quantum Kolmogorov complexity coincides with the classical Kolmogorov complexity on the classical domain. Quantum Kolmogorov complexity is upper-bounded and can be effectively approximated from above under certain conditions. With high probability, a quantum object is incompressible. Upper and lower bounds of the quantum complexity of multiple copies of individual pure quantum states are derived and may shed some light on the no-cloning properties of quantum states. In the quantum situation complexity is not subadditive. We discuss some relations with \"no-cloning\" and \"approximate cloning\" properties.", "In this paper we give a definition for quantum Kolmogorov complexity. In the classical setting, the Kolmogorov complexity of a string is the length of the shortest program that can produce this string as its output. It is a measure of the amount of innate randomness (or information) contained in the string. We define the quantum Kolmogorov complexity of a qubit string as the length of the shortest quantum input to a universal quantum Turing machine that produces the initial qubit string with high fidelity. The definition of P. Vitanyi (2001, IEEE Trans. Inform. Theory 47, 2464-2479) measures the amount of classical information, whereas we consider the amount of quantum information in a qubit string. We argue that our definition is a natural and accurate representation of the amount of quantum information contained in a quantum state. Recently, P. Gacs (2001, J. Phys. A: Mathematical and General 34, 6859-6880) also proposed two measures of quantum algorithmic entropy which are based on the existence of a universal semidensity matrix. The latter definitions are related to Vitanyi's and the one presented in this article, respectively.", "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ('universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, van Dam and Laplante. The 'cloning' properties of our complexity measure are similar to those of qubit complexity." ] }
quant-ph0212071
1522728444
We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as -log_2 m(s) without using the concept of program-size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countable measurement outcomes performed upon a finite dimensional quantum system. We show that, up to a multiplicative constant, 2^ -K(s) is the upper bound for the probability of each measurement outcome s in such a POVM measurement. In what follows, the upper bound 2^ -K(s) is shown to be optimal in a certain sense.
In quantum mechanics, what is represented by a matrix is either a quantum state or a measurement operator. In this paper we generalize the universal probability to a matrix-valued function in a different way from @cite_2 , and identify it with an analogue of a POVM. We do not attempt to define the information content of a quantum state. Instead, we focus on applying algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure of quantum mechanics. Along this line we obtain the inequalities above.
{ "cite_N": [ "@cite_2" ], "mid": [ "1999580286" ], "abstract": [ "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ('universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, van Dam and Laplante. The 'cloning' properties of our complexity measure are similar to those of qubit complexity." ] }
quant-ph0212071
1522728444
We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as -log_2 m(s) without using the concept of program-size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countable measurement outcomes performed upon a finite dimensional quantum system. We show that, up to a multiplicative constant, 2^ -K(s) is the upper bound for the probability of each measurement outcome s in such a POVM measurement. In what follows, the upper bound 2^ -K(s) is shown to be optimal in a certain sense.
In each of @cite_4 and @cite_3 , the quantum Kolmogorov complexity of a qubit string was defined as a quantum generalization of the standard definition of classical Kolmogorov complexity: the length of the shortest input for the universal decoding algorithm @math to output a finite binary string. Both @cite_4 and @cite_3 adopt the universal quantum Turing machine as the universal decoding algorithm @math to output a quantum state in their definitions. However, there is a difference between @cite_4 and @cite_3 with respect to the object which is allowed as an input to @math . That is, @cite_4 only allows a classical binary string as an input, whereas @cite_3 allows any qubit string. The works @cite_4 , @cite_3 , and @cite_2 are closely related to one another, as shown in each of these works. In comparison with our work, since our work is, in essence, based on a generalization of the universal probability, the work @cite_2 is more related to our work than the works @cite_4 and @cite_3 . These two works may be related to our work via the work @cite_2 .
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2106534004", "2762038352", "1999580286" ], "abstract": [ "We develop a theory of the algorithmic information in bits contained in an individual pure quantum state. This extends classical Kolmogorov complexity to the quantum domain retaining classical descriptions. Quantum Kolmogorov complexity coincides with the classical Kolmogorov complexity on the classical domain. Quantum Kolmogorov complexity is upper-bounded and can be effectively approximated from above under certain conditions. With high probability, a quantum object is incompressible. Upper and lower bounds of the quantum complexity of multiple copies of individual pure quantum states are derived and may shed some light on the no-cloning properties of quantum states. In the quantum situation complexity is not subadditive. We discuss some relations with \"no-cloning\" and \"approximate cloning\" properties.", "In this paper we give a definition for quantum Kolmogorov complexity. In the classical setting, the Kolmogorov complexity of a string is the length of the shortest program that can produce this string as its output. It is a measure of the amount of innate randomness (or information) contained in the string. We define the quantum Kolmogorov complexity of a qubit string as the length of the shortest quantum input to a universal quantum Turing machine that produces the initial qubit string with high fidelity. The definition of P. Vitanyi (2001, IEEE Trans. Inform. Theory 47, 2464-2479) measures the amount of classical information, whereas we consider the amount of quantum information in a qubit string. We argue that our definition is a natural and accurate representation of the amount of quantum information contained in a quantum state. Recently, P. Gacs (2001, J. Phys. A: Mathematical and General 34, 6859-6880) also proposed two measures of quantum algorithmic entropy which are based on the existence of a universal semidensity matrix. The latter definitions are related to Vitanyi's and the one presented in this article, respectively.", "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ('universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, van Dam and Laplante. The 'cloning' properties of our complexity measure are similar to those of qubit complexity." ] }
0705.4604
1556312007
In this paper we present an algorithm for performing runtime verification of a bounded temporal logic over timed runs. The algorithm consists of three elements. First, the bounded temporal formula to be verified is translated into a monadic first-order logic over difference inequalities, which we call monadic difference logic. Second, at each step of the timed run, the monadic difference formula is modified by computing a quotient with the state and time of that step. Third, the resulting formula is checked for being a tautology or being unsatisfiable by a decision procedure for monadic difference logic. We further provide a simple decision procedure for monadic difference logic based on the data structure Difference Decision Diagrams. The algorithm is complete in a very strong sense on a subclass of temporal formulae characterized as homogeneously monadic and it is approximate on other formulae. The approximation comes from the fact that not all unsatisfiable or tautological formulae are recognised at the earliest possible time of the runtime verification. Contrary to existing approaches, the presented algorithms do not work by syntactic rewriting but employ efficient decision structures which make them applicable in real applications within for instance business software.
We take a different approach. By encoding the runtime verification problem as a satisfiability problem for a monadic first-order logic, we arrive at a different type of algorithm. This algorithm is capable of utilizing a powerful decision structure for difference logic which inherits some of the strengths of binary decision diagrams @cite_20 .
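As a sketch of the flavor of decision procedure involved: satisfiability of a conjunction of difference constraints x - y <= c reduces to negative-cycle detection in a constraint graph. The classic Bellman-Ford check below is standard material, not the paper's DDD-based procedure (which must also handle the monadic first-order structure).

def diff_sat(constraints):
    """constraints: list of (x, y, c) triples meaning x - y <= c.
    Satisfiable iff the graph with an edge y -> x of weight c for each
    constraint has no negative cycle (Bellman-Ford from a virtual source)."""
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    dist = {v: 0 for v in nodes}           # virtual source at distance 0
    for _ in range(len(nodes)):
        for x, y, c in constraints:        # relax edge y -> x
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # a further possible relaxation exposes a negative cycle
    return all(dist[y] + c >= dist[x] for x, y, c in constraints)

print(diff_sat([("x", "y", 1), ("y", "x", 0)]))    # consistent: True
print(diff_sat([("x", "y", 1), ("y", "x", -2)]))   # x <= y+1, y <= x-2: False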
{ "cite_N": [ "@cite_20" ], "mid": [ "2080267935" ], "abstract": [ "In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably in speed with the state-of-the-art implementation of SLG.
There are three different tabling schemes, namely OLDT and SLG @cite_27 @cite_10 , CAT @cite_20 @cite_35 , and iteration-based tabling including linear tabling @cite_23 @cite_18 @cite_33 @cite_7 @cite_17 and DRA @cite_15 . SLG @cite_16 is a formalization based on OLDT for computing well-founded semantics for general programs with negation. The basic idea of using iterative deepening to compute fixpoints dates back to the ET* algorithm @cite_2 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_7", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "1574950491", "2950622282", "2096979400", "", "1522225310", "2169591289", "1528738961", "2155945137", "2070598037", "1997210046", "2173733616", "2151952683" ], "abstract": [ "For any LP system, tabling can be quite handy in a variety of tasks, especially if it is efficiently implemented and fully integrated in the language. Implementing tabling in Mercury poses special challenges for several reasons. First, Mercury is both semantically and culturally quite different from Prolog. While decreeing that tabled predicates must not include cuts is acceptable in a Prolog system, it is not acceptable in Mercury, since if-then-elses and existential quantification have sound semantics for stratified programs and are used very frequently both by programmers and by the compiler. The Mercury implementation thus has no option but to handle interactions of tabling with Mercury’s language features safely. Second, the Mercury implementation is vastly different from the WAM, and many of the differences (e.g. the absence of a trail) have significant impact on the implementation of tabling. In this paper, we describe how we adapted the copying approach to tabling to implement tabling in Mercury.", "Infinite loops and redundant computations are long recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG- resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) It makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced. It handles cuts as effectively as Prolog. (2) It is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM.", "Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. 
The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog. Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.", "", "To resolve the search-incompleteness of depth-first logic program interpreters, a new interpretation method based on the tabulation technique is developed and modeled as a refinement to SLD resolution. Its search space completeness is proved, and a complete search strategy consisting of iterated stages of depth-first search is presented. It is also proved that for programs defining finite relations only, the method under an arbitrary search strategy is terminating and complete.", "Global SLS-resolution and SLG-resolution are two representative mechanisms for top-down evaluation of the well-founded semantics of general logic programs. Global SLS-resolution is linear but suffers from infinite loops and redundant computations. In contrast, SLG-resolution resolves infinite loops and redundant computations by means of tabling, but it is not linear. The distinctive advantage of a linear approach is that it can be implemented using a simple, efficient stack-based memory structure like that in Prolog. In this paper we present a linear tabulated resolution for the well-founded semantics, which resolves the problems of infinite loops and redundant computations while preserving the linearity. For non-floundering queries, the proposed method is sound and complete for general logic programs with the bounded-term-size property.", "", "Tabled logic programming (LP) systems have been applied to elegantly and quickly solving very complex problems (e.g., model checking). However, techniques currently employed for incorporating tabling in an existing LP system are quite complex and require considerable change to the LP system. We present a simple technique for incorporating tabling in existing LP systems based on dynamically reordering clauses containing variant calls at run-time. Our simple technique allows tabled evaluation to be performed with a single SLD tree and without the use of complex operations such as freezing of stacks and heap. It can be incorporated in an existing logic programming system with a small amount of effort. Our scheme also facilitates exploitation of parallelism from tabled LP systems. Results of incorporating our scheme in the commercial ALS Prolog system are reported.", "SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog systems.
Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evaluation for three reasons: (a) it may not terminate due to infinite positive recursion; (b) it may not terminate due to infinite recursion through negation; and (c) it may repeatedly evaluate the same literal in a rule body, leading to unacceptable performance. We address all three problems for goal-oriented query evaluation of general logic programs by presenting tabled evaluation with delaying, called SLG resolution. It has three distinctive features: (i) SLG resolution is a partial deduction procedure, consisting of seven fundamental transformations. A query is transformed step by step into a set of answers. The use of transformations separates logical issues of query evaluation from procedural ones. SLG allows an arbitrary computation rule for selecting a literal from a rule body and an arbitrary control strategy for selecting transformations to apply. (ii) SLG resolution is sound and search space complete with respect to the well-founded partial model for all non-floundering queries, and preserves all three-valued stable models. To evaluate a query under different three-valued stable models, SLG resolution can be enhanced by further processing of the answers of subgoals relevant to a query. (iii) SLG resolution avoids both positive and negative loops and always terminates for programs with the bounded-term-size property. It has a polynomial time data complexity for well-founded negation of function-free programs. Through a delaying mechanism for handling ground negative literals involved in loops, SLG resolution avoids the repetition of any of its derivation steps. Restricted forms of SLG resolution are identified for definite, locally stratified, and modularly stratified programs, shedding light on the role each transformation plays.", "SLG resolution uses tabling to evaluate nonfloundering normal logic programs according to the well-founded semantics. The SLG-WAM, which forms the engine of the XSB system, can compute in-memory recursive queries an order of magnitude faster than current deductive databases. At the same time, the SLG-WAM tightly integrates Prolog code with tabled SLG code, and executes Prolog code with minimal overhead compared to the WAM. As a result, the SLG-WAM brings to logic programming important termination and complexity properties of deductive databases. This article describes the architecture of the SLG-WAM for a powerful class of programs, the class of fixed-order dynamically stratified programs. We offer a detailed description of the algorithms, data structures, and instructions that the SLG-WAM adds to the WAM, and a performance analysis of engine overhead due to the extensions.", "", "Semi-naive evaluation is an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers. The impact of this technique on top-down evaluation had been unknown. In this paper, we introduce semi-naive evaluation into linear tabling, a top-down resolution mechanism for tabled logic programs. We give the conditions for the technique to be safe and propose an optimization technique called early answer promotion to enhance its effectiveness. While semi-naive evaluation is not as effective in linear tabling as in bottom-up evaluation, it is worthwhile to be adopted. Our benchmarking shows that this technique gives significant speed-ups to some programs." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably in speed with the state-of-the-art implementation of SLG.
In SLG-WAM, a consumer fails after it exhausts all the existing answers and its state is preserved by freezing the stack so that it can be reactivated after new answers are generated. The CAT approach does not freeze the stack but instead copies the stack segments between the consumer and its producer into a separate area so that backtracking can be done normally. The saved state is reinstalled after a new answer is generated. CHAT @cite_26 is a hybrid approach that combines SLG-WAM and CAT.
{ "cite_N": [ "@cite_26" ], "mid": [ "2118889869" ], "abstract": [ "The Copying Approach to Tabling, abbrv. CAT, is an alternative to SLG-WAM and based on total copying of the areas that the SLG-WAM freezes to preserve execution states of suspended computations. The disadvantage of CAT as pointed out in a previous paper is that in the worst case, CAT must copy so much that it becomes arbitrarily worse than the SLG-WAM. Remedies to this problem have been studied, but a completely satisfactory solution has not emerged. Here, a hybrid approach is presented: CHAT. Its design was guided by the requirement that for non-tabled (i.e. Prolog) execution no changes to the underlying WAM engine need to be made. CHAT combines certain features of the SLG-WAM with features of CAT, but also introduces a technique for freezing WAM stacks without the use of the SLG-WAM's freeze registers that is of independent interest. Empirical results indicate that CHAT is a better choice for implementing the control of tabling than SLG-WAM or CAT. However, programs with arbitrarily worse behaviour exist." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably in speed with the state-of-the-art implementation of SLG.
The DRA method @cite_15 is also iteration-based, but it identifies looping clauses dynamically and iterates the execution of looping clauses to compute fixpoints. While in linear tabling iteration is performed on only top-most looping subgoals, in DRA iteration is performed on every looping subgoal. In ET* @cite_2 , every tabled subgoal is iterated even if it does not occur in a loop. Besides the difference in answer consumption strategies and optimizations, the linear tabling scheme described in this paper differs from the original version @cite_33 @cite_18 in that followers fail after they exhaust their answers rather than steal their pioneers' choice points. This strategy was originally adopted in the DRA method.
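Since both linear tabling and DRA compute fixpoints iteratively, the effect of semi-naive evaluation (joining only newly derived answers) is easiest to see on the classic bottom-up transitive-closure example below; this illustrates the optimization's idea, not the top-down tabling machinery itself.

def transitive_closure(edges):
    """Semi-naive least fixpoint of path(X,Z) :- edge(X,Z).
                                 path(X,Z) :- path(X,Y), edge(Y,Z).
    Each round joins only the delta (answers new in the previous round)
    with the base edges, avoiding redundant re-joins of old answers."""
    path = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path               # keep only genuinely new answers
        path |= delta
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]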
{ "cite_N": [ "@cite_15", "@cite_18", "@cite_33", "@cite_2" ], "mid": [ "2155945137", "2950622282", "2096979400", "1528738961" ], "abstract": [ "Tabled logic programming (LP) systems have been applied to elegantly and quickly solving very complex problems (e.g., model checking). However, techniques currently employed for incorporating tabling in an existing LP system are quite complex and require considerable change to the LP system. We present a simple technique for incorporating tabling in existing LP systems based on dynamically reordering clauses containing variant calls at run-time. Our simple technique allows tabled evaluation to be performed with a single SLD tree and without the use of complex operations such as freezing of stacks and heap. It can be incorporated in an existing logic programming system with a small amount of effort. Our scheme also facilitates exploitation of parallelism from tabled LP systems. Results of incorporating our scheme in the commercial ALS Prolog system are reported.", "Infinite loops and redundant computations are long recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG- resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) It makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced. It handles cuts as effectively as Prolog. (2) It is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM.", "Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog.
Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.", "" ] }
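As a rough illustration of the iterative fixpoint idea behind linear tabling and DRA, the following toy Python sketch (not an actual tabling engine; all names are invented) re-evaluates the clauses of a looping subgoal, here transitive closure over a fixed edge relation, until its answer table stops growing; a follower simply consumes the answers already in the table and fails once they are exhausted.

EDGES = {("a", "b"), ("b", "c"), ("c", "d")}

def solve_path():
    table = set()                     # answer table for the tabled subgoal path/2
    while True:
        before = len(table)
        # clause 1: path(X,Y) :- edge(X,Y).
        table |= EDGES
        # clause 2: path(X,Y) :- path(X,Z), edge(Z,Y).
        # the recursive call behaves like a follower: it only consumes
        # answers already in the table instead of re-entering the call
        table |= {(x, w) for (x, z) in table for (z2, w) in EDGES if z == z2}
        if len(table) == before:      # fixpoint reached: no new answers this round
            return table

print(sorted(solve_path()))           # the six path/2 answers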
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
The two consumption strategies have been compared in XSB @cite_5 as two scheduling strategies. The lazy strategy is called local scheduling and the eager strategy is called single-stack scheduling . Another strategy, called batched scheduling , is similar to local scheduling but top-most looping subgoals do not have to wait until their clusters become complete to return answers. Their experimental results indicate that local scheduling consistently outperforms the other two strategies on stack space and can perform asymptotically better than the other two strategies on speed. The superior performance of local scheduling is attributed to the saving of freezing stack segments. Although our experiment confirms the good space performance of the lazy strategy, it gives the counterintuitive result that the eager strategy is as fast as the lazy strategy. This result implies that the cost of iterative evaluation is considerably smaller than that of freezing stack segments, and that for predicates with cuts the eager strategy can be used without significant slow-down (the generator sketch after the reference data below contrasts the two strategies). In our tabling system, different answer consumption strategies can be used for different predicates. The tabling system described in @cite_31 also supports mixed strategies.
{ "cite_N": [ "@cite_5", "@cite_31" ], "mid": [ "1518621415", "2130683061" ], "abstract": [ "Tabled evaluations ensure termination of logic programs with finite models by keeping track of which subgoals have been called. Given several variant subgoals in an evaluation, only the first one encountered will use program clause resolution; the rest uses answer resolution. This use of answer resolution prevents infinite looping which happens in SLD. Given the asynchronicity of answer generation and answer return, tabling systems face an important scheduling choice not present in traditional top-down evaluation: How does the order of returning answers to consuming subgoals affect program efficiency.", "Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog by reusing answers to subgoals. During tabled execution, several decisions have to be made. These are determined by the scheduling strategy. Whereas a strategy can achieve very good performance for certain applications, for others it might add overheads and even lead to unacceptable inefficiency. The ability of using multiple strategies within the same evaluation can be a means of achieving the best possible performance. In this work, we present how the YapTab system was designed to support dynamic mixed-strategy evaluation of the two most successful tabling scheduling strategies: batched scheduling and local scheduling." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
Semi-naive optimization is a fundamental idea for reducing redundancy in bottom-up evaluation of logic database queries @cite_21 @cite_32 (a sketch follows the reference data below). As far as we know, its impact on top-down evaluation had been unknown before @cite_17 . OLDT @cite_27 and SLG @cite_10 do not need this technique since they are not iterative and their underlying delaying mechanisms successfully avoid the repetition of any derivation step. An attempt has been made by Guo and Gupta @cite_15 to make incremental consumption of tabled answers possible in DRA. In their scheme, answers are also divided into three regions, but answers are consumed incrementally as in bottom-up evaluation. Since no condition is given for completeness and no experimental result is reported on the impact of the technique, we are unable to give a detailed comparison.
{ "cite_N": [ "@cite_21", "@cite_32", "@cite_27", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2132063146", "1588644244", "1522225310", "2155945137", "1997210046", "2151952683" ], "abstract": [ "This paper surveys and compares various strategies for processing logic queries in relational databases. The survey and comparison is limited to the case of Horn Clauses with evaluable predicates but without function symbols. The paper is organized in three parts. In the first part, we introduce the main concepts and definitions. In the second, we describe the various strategies. For each strategy, we give its main characteristics, its application range and a detailed description. We also give an example of a query evaluation. The third part of the paper compares the strategies on performance grounds. We first present a set of sample rules and queries which are used for the performance comparisons, and then we characterize the data. Finally, we give an analytical solution for each query rule system. Cost curves are plotted for specific configurations of the data.", "", "To resolve the search-incompleteness of depth-first logic program interpreters, a new interpretation method based on the tabulation technique is developed and modeled as a refinement to SLD resolution. Its search space completeness is proved, and a complete search strategy consisting of iterated stages of depth-first search is presented. It is also proved that for programs defining finite relations only, the method under an arbitrary search strategy is terminating and complete.", "Tabled logic programming (LP) systems have been applied to elegantly and quickly solving very complex problems (e.g., model checking). However, techniquescurren tly employed for incorporating tabling in an existing LP system are quite complex and require considerable change to the LP system. We present a simple technique for incorporating tabling in existing LP systems based on dynamically reordering clauses containing variant callsat run-time. Our simple technique allows tabled evaluation to be performed with a single SLD tree and without the use of complex operations such as freezing of stacks and heap. It can be incorporated in an existing logic programming system with a small amount of effort. Our scheme also facilitates exploitation of parallelism from tabled LP systems. Results of incorporating our scheme in the commercial ALS Prolog system are reported.", "SLG resolution uses tabling to evaluate nonfloundering normal logic pr ograms according to the well-founded semantics. The SLG-WAM, which forms the engine of the XSB system, can compute in-memory recursive queries an order of magnitute faster than current deductive databases. At the same time, the SLG-WAM tightly intergrates Prolog code with tabled SLG code, and executes Prolog code with minimal overhead compared to the WAM. As a result, the SLG-WAM brings to logic programming important termination and complexity properties of deductive databases. This article describes the architecture of the SLG-WAM for a powerful class of programs, the class of fixed-order dynamically stratified programs . We offer a detailed description of the algorithms, data structures, and instructions that the SLG-WAM adds to the WAM, and a performance analysis of engine overhead due to the extensions.", "Semi-naive evaluation is an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers. The impact of this technique on top-down evaluation had been unknown. 
In this paper, we introduce semi-naive evaluation into linear tabling, a top-down resolution mechanism for tabled logic programs. We give the conditions for the technique to be safe and propose an optimization technique called early answer promotion to enhance its effectiveness. While semi-naive evaluation is not as effective in linear tabling as in bottom-up evaluation, it is worthwhile to be adopted. Our benchmarking shows that this technique gives significant speed-ups to some programs." ] }
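The following minimal Python sketch of semi-naive bottom-up evaluation (on transitive closure, with invented names) shows the optimization the record above refers to: each round joins only the delta of answers derived in the previous round against the base relation, so no join of two old answers is ever repeated.

EDGES = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")}

def semi_naive_tc(edges):
    total = set(edges)
    delta = set(edges)
    while delta:
        # only answers derived in the previous round participate in the join
        delta = {(x, w) for (x, z) in delta
                        for (z2, w) in edges if z == z2} - total
        total |= delta
    return total

print(len(semi_naive_tc(EDGES)), "answers")   # 10 = 4+3+2+1 for a 5-node chain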
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
Semi-naive optimization does not solve all the problems of recomputation in linear tabling. Recall Warren's example: assume there is a very costly non-tabled subgoal preceding p(X,Z); then that subgoal has to be executed in each iteration even with semi-naive optimization. This example demonstrates the acuteness of the recomputation problem because the number of iterations needed to reach the fixpoint is not constant. One treatment would be to table the subgoal to avoid recomputation, as suggested in @cite_15 , but tabling extra predicates can cause other problems such as over-consumption of table space (a toy memoization sketch follows the reference data below).
{ "cite_N": [ "@cite_15" ], "mid": [ "2155945137" ], "abstract": [ "Tabled logic programming (LP) systems have been applied to elegantly and quickly solving very complex problems (e.g., model checking). However, techniquescurren tly employed for incorporating tabling in an existing LP system are quite complex and require considerable change to the LP system. We present a simple technique for incorporating tabling in existing LP systems based on dynamically reordering clauses containing variant callsat run-time. Our simple technique allows tabled evaluation to be performed with a single SLD tree and without the use of complex operations such as freezing of stacks and heap. It can be incorporated in an existing logic programming system with a small amount of effort. Our scheme also facilitates exploitation of parallelism from tabled LP systems. Results of incorporating our scheme in the commercial ALS Prolog system are reported." ] }
0705.3243
2102136055
Traceroute sampling is an important technique in exploring the internet router graph and the autonomous system graph. Although it is one of the primary techniques used in calculating statistics about the internet, it can introduce bias that corrupts these estimates. This paper reports on a theoretical and experimental investigation of a new technique to reduce the bias of traceroute sampling when estimating the degree distribution. We develop a new estimator for the degree of a node in a traceroute-sampled graph; validate the estimator theoretically in Erdos-Renyi graphs and, through computer experiments, for a wider range of graphs; and apply it to produce a new picture of the degree distribution of the autonomous system graph.
Internet mapping by traceroute sampling was pioneered by Pansiot and Grad in @cite_10 , and the scale-free nature of the degree distribution was observed by Faloutsos, Faloutsos, and Faloutsos in @cite_21 . Since 1998, the Cooperative Association for Internet Data Analysis (CAIDA) project has archived traceroute data that is collected daily @cite_17 . The bias introduced by traceroute sampling was identified in computer experiments by Lakhina, Byers, Crovella, and Xie in @cite_4 and by Petermann and De Los Rios @cite_16 , and formally proven to hold in a model of one-monitor, all-target traceroute sampling by Clauset and Moore @cite_2 and, in further generality, by Achlioptas, Clauset, Kempe, and Moore @cite_1 . Computer experiments by Guillaume, Latapy, and Magoni @cite_14 and an analysis using the mean-field approximation of statistical physics due to Dall'Asta, Alvarez-Hamelin, Barrat, Vázquez, and Vespignani @cite_11 argue that, despite the bias introduced by traceroute sampling, some sort of scale-free behavior can be inferred from the union of traceroute-sampled paths (a small simulation of this bias follows the reference data below).
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_11", "@cite_21", "@cite_1", "@cite_2", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1971795786", "2107648668", "1540064387", "2912257242", "2120511087", "1519020025", "2060730631", "2090223505", "" ], "abstract": [ "Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conducted to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, some statistical properties are very robust, and so the known values for the Internet may be considered as reliable. We validate our analysis using real-world data and experiments, and we discuss its implications.", "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "Mapping the Internet generally consists in sampling the network from a limited set of sources by using \"traceroute\"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability is depending on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a throughout numerical investigation of simulated mapping strategies in different network models. 
We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint the steps toward more efficient mapping strategies.", "", "Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper by found empirically that the resuting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of on a rigorous footing, and extends them to nearly arbitrary degree distributions.", "Department of Physics and Astronomy,University of New Mexico, Albuquerque NM 87131(aaron,moore)@cs.unm.edu(Dated: February 2, 2008)A great deal of effort has been spent measuring topological features of the Internet. However, itwas recently argued that sampling based on taking paths or traceroutes through the network froma small number of sources introduces a fundamental bias in the observed degree distribution. Weexamine this bias analytically and experimentally. For Erd˝os-R´enyirandom graphs with mean degreec, we show analytically that traceroute sampling gives an observed degree distribution P(k) ∼ k", "The increased availability of data on real networks has favoured an explosion of activity in the elaboration of models able to reproduce both qualitatively and quantitatively the measured properties. What has been less explored is the reliability of the data, and whether the measurement technique biases them. Here we show that tree-like explorations (similar in principle to traceroute) can indeed change the measured exponents of a scale-free network.", "Multicasting has an increasing importance for network applications such as groupware or videoconferencing. Several multicast routing protocols have been defined. However they cannot be used directly in the Internet since most inter-domain routers do no implement multicasting. Thus these protocols are mainly tested either on a small scale inside a domain, or through the Mbone, whose topology is not really the same as Internet topology. 
The purpose of this paper is to construct a graph using actual routes of the Internet, and then to use this graph to compare some parameters - delays, scaling in term of state or traffic concentration - of multicast routing trees constructed by different algorithms - source shortest path trees and shared trees.", "" ] }
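The bias discussed above is easy to reproduce in a few lines. The following sketch (parameters arbitrary; requires the networkx package) grows a sparse Erdos-Renyi graph, takes the BFS tree from a single monitor as a crude stand-in for one-monitor traceroute sampling, and compares observed against true degrees.

import networkx as nx

G = nx.gnp_random_graph(2000, 6 / 2000, seed=1)   # sparse G(n,p), mean degree ~ 6
source = max(G.nodes, key=G.degree)               # a well-connected monitor
T = nx.bfs_tree(G, source)                        # union of shortest paths from it

true_deg = dict(G.degree())
obs_deg = {v: T.in_degree(v) + T.out_degree(v) for v in T.nodes}
ratios = [obs_deg[v] / true_deg[v] for v in T.nodes if true_deg[v] > 0]
print("mean observed/true degree ratio: %.2f" % (sum(ratios) / len(ratios)))
# the ratio is well below 1 and varies strongly across nodes: the
# sampled degrees are biased, as argued in the papers cited above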
0705.3243
2102136055
Traceroute sampling is an important technique in exploring the internet router graph and the autonomous system graph. Although it is one of the primary techniques used in calculating statistics about the internet, it can introduce bias that corrupts these estimates. This paper reports on a theoretical and experimental investigation of a new technique to reduce the bias of traceroute sampling when estimating the degree distribution. We develop a new estimator for the degree of a node in a traceroute-sampled graph; validate the estimator theoretically in Erdos-Renyi graphs and, through computer experiments, for a wider range of graphs; and apply it to produce a new picture of the degree distribution of the autonomous system graph.
In addition to traceroute sampling, maps of the AS graph have been generated in two other ways: from BGP tables and from the WHOIS database. A recent paper by Mahadevan, Krioukov, Fomenkov, Dimitropoulos, claffy, and Vahdat provides a detailed comparison of the graphs that result from each of these measurement techniques @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2139905147" ], "abstract": [ "We calculate an extensive set of characteristics for Internet AS topologies extracted from the three data sources most frequently used by the research community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP topologies are similar to one another but differ substantially from the WHOIS topology. Among the widely considered metrics, we find that the joint degree distribution appears to fundamentally characterize Internet AS topologies as well as narrowly define values for other important metrics. We discuss the interplay between the specifics of the three data collection mechanisms and the resulting topology views. In particular, we how how the data collection peculiarities explain differences in the resulting joint degree distributions of the respective topologies. Finally, we release to the community the input topology datasets, along with the scripts and output of our calculations. This supplement hould enable researchers to validate their models against real data and to make more informed election of topology data sources for their specific needs" ] }
0706.3265
1929816335
This paper studies the gap between quantum one-way communication complexity Q(f) and its classical counterpart C(f), under the unbounded-error setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for any (total or partial) Boolean function f, Q(f) = ⌈C(f)/2⌉, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining an exact bound for the existence of (m, n, p)-QRAC which is the n-qubit random access coding that can recover any one of m original bits with success probability ≥ p. We can prove that (m, n, > 1/2)-QRAC exists if and only if m ≤ 2^{2n} - 1. Previously, only the non-existence of (2^{2n}, n, > 1/2)-QRAC was known.
Partial/Total Boolean Functions. For total functions, the one-way quantum communication complexity is nicely characterized or bounded below in several ways. Klauck @cite_22 characterized the one-way communication complexity of total Boolean functions by the number of distinct rows of the communication matrix in the exact setting, i.e., when the success probability is one, and showed that it equals the one-way deterministic communication complexity (a toy computation of this quantity follows the reference data below). He also gave a lower bound on the bounded-error one-way quantum communication complexity of total Boolean functions in terms of the VC dimension. Aaronson @cite_20 @cite_10 presented lower bounds on the one-way quantum communication complexity that are also applicable to partial Boolean functions. His lower bounds are given in terms of the deterministic or bounded-error classical communication complexity and the length of Bob's input, and they are shown to be tight by using the partial Boolean function of @cite_11 .
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_20", "@cite_11" ], "mid": [ "2171253836", "2083425572", "2611161794", "2148488235" ], "abstract": [ "Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n . But using ideas from computational learning theory, we show that one can do exponentially better in a statistical setting. In particular, to predict the outcomes of most measurements drawn from an arbitrary probability distribution, one needs only a number of sample measurements that grows linearly with n . This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense. The theorem also has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.", "", "Although a quantum state requires exponentially many classical bits to describe, the laws of quantum mechanics impose severe restrictions on how that state can be accessed. This paper shows in three settings that quantum messages have only limited advantages over classical ones. First, we show that BQP qpoly is contained in PP poly, where BQP qpoly is the class of problems solvable in quantum polynomial time, given a polynomial-size \"quantum advice state\" that depends only on the input length. This resolves a question of Buhrman, and means that we should not hope for an unrelativized separation between quantum and classical advice. Underlying our complexity result is a general new relation between deterministic and quantum one-way communication complexities, which applies to partial as well as total functions. Second, we construct an oracle relative to which NP is not contained in BQP qpoly. To do so, we use the polynomial method to give the first correct proof of a direct product theorem for quantum search. This theorem has other applications; for example, it can be used to fix a flawed result of Klauck about quantum time-space tradeoffs for sorting. Third, we introduce a new trace distance method for proving lower bounds on quantum one-way communication complexity. Using this method, we obtain optimal quantum lower bounds for two problems of Ambainis, for which no nontrivial lower bounds were previously known even for classical randomized protocols.", "We give an exponential separation between one-way quantum and classical communication protocols for twopartial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar- Earlier such an exponential separation was known only for a relational version of the Hidden Matching Problem. Our proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial. We give a number of applications of this separation. In particular, in the bounded-storage model of cryptography we exhibita scheme that is secure against adversaries with a certain amount of classical storage, but insecure against adversaries with a similar (or even much smaller) amount of quantum storage; in the setting of privacy amplification, we show that there are strong extractors that yield a classically secure key, but are insecure against a quantum adversary." ] }
0706.3265
1929816335
This paper studies the gap between quantum one-way communication complexity Q(f) and its classical counterpart C(f), under the unbounded-error setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for any (total or partial) Boolean function f, Q(f) = ⌈C(f)/2⌉, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining an exact bound for the existence of (m, n, p)-QRAC which is the n-qubit random access coding that can recover any one of m original bits with success probability ≥ p. We can prove that (m, n, > 1/2)-QRAC exists if and only if m ≤ 2^{2n} - 1. Previously, only the non-existence of (2^{2n}, n, > 1/2)-QRAC was known.
Private-coin/Public-coin Models. The exponential quantum/classical separations in @cite_13 and @cite_11 still hold under the public-coin model where Alice and Bob share random coins, since the one-way classical public-coin model can be simulated by the one-way classical private-coin model with additional @math -bit communication @cite_19 . However, an exponential quantum/classical separation for total functions remains open for all of the bounded-error two-way, one-way and SMP models. Note that the public-coin model is too powerful in the unbounded-error setting: we can easily see that the unbounded-error one-way (classical or quantum) communication complexity of any function (or relation) is @math with prior shared randomness (a toy simulation of the folklore one-bit protocol follows the reference data below).
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_11" ], "mid": [ "2088263428", "203155997", "2148488235" ], "abstract": [ "Abstract We investigate the relative power of the common random string model vs. the private random string model in communication complexity. We show that the models are essentially equal.", "", "We give an exponential separation between one-way quantum and classical communication protocols for twopartial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar- Earlier such an exponential separation was known only for a relational version of the Hidden Matching Problem. Our proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial. We give a number of applications of this separation. In particular, in the bounded-storage model of cryptography we exhibita scheme that is secure against adversaries with a certain amount of classical storage, but insecure against adversaries with a similar (or even much smaller) amount of quantum storage; in the setting of privacy amplification, we show that there are strong extractors that yield a classically secure key, but are insecure against a quantum adversary." ] }
0706.3265
1929816335
This paper studies the gap between quantum one-way communication complexity Q(f) and its classical counterpart C(f), under the unbounded-error setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for any (total or partial) Boolean function f, Q(f) = ⌈C(f)/2⌉, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining an exact bound for the existence of (m, n, p)-QRAC which is the n-qubit random access coding that can recover any one of m original bits with success probability ≥ p. We can prove that (m, n, > 1/2)-QRAC exists if and only if m ≤ 2^{2n} - 1. Previously, only the non-existence of (2^{2n}, n, > 1/2)-QRAC was known.
Unbounded-error Models. Since the seminal paper @cite_17 , the unbounded-error (classical) one-way communication complexity has been studied extensively in the literature @cite_8 @cite_12 @cite_2 . (Note that in the classical setting, the difference in communication cost between the one-way and two-way models is at most @math bit.) Klauck @cite_5 also studied a variant of the unbounded-error quantum and classical communication complexity, called the weakly unbounded-error communication complexity: the cost is the number of communicated (qu)bits plus @math , where @math is the success probability. He characterized the discrepancy, a useful measure for bounded-error communication complexity @cite_18 , in terms of the weakly unbounded-error communication complexity.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_2", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "605043455", "2143355494", "2074760215", "2949863531", "1515707347", "2008671796" ], "abstract": [ "", "We prove a general lower bound on the complexity of unbounded error probabilistic communication protocols. This result improves on a lower bound for bounded error protocols from Krause (1996). As a simple consequence we get the, to our knowledge, first linear lower bound on the complexity of unbounded error probabilistic communication protocols for the functions defined by Hadamard matrices. We also give an upper bound on the margin of any embedding of a concept class in half spaces.", "This paper discusses theoretical limitations of classification systems that are based on feature maps and use a separating hyperplane in the feature space. In particular, we study the embeddability of a given concept class into a class of Euclidean half spaces of low dimension, or of arbitrarily large dimension but realizing a large margin. New bounds on the smallest possible dimension or on the largest possible margin are presented. In addition, we present new results on the rigidity of matrices and briefly mention applications in complexity and learning theory.", "We prove new lower bounds for bounded error quantum communication complexity. Our methods are based on the Fourier transform of the considered functions. First we generalize a method for proving classical communication complexity lower bounds developed by Raz to the quantum case. Applying this method we give an exponential separation between bounded error quantum communication complexity and nondeterministic quantum communication complexity. We develop several other lower bound methods based on the Fourier transform, notably showing that s (f) n , for the average sensitivity s (f) of a function f, yields a lower bound on the bounded error quantum communication complexity of f(x AND y XOR z), where x is a Boolean word held by Alice and y,z are Boolean words held by Bob. We then prove the first large lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible. For all the functions we investigate, the only previously applied general lower bound method based on discrepancy yields bounds that are O( n).", "Recently, Forster [7] proved a new lower bound on probabilistic communication complexity in terms of the operator norm of the communication matrix. In this paper, we want to exploit the various relations between communication complexity of distributed Boolean functions, geometric questions related to half space representations of these functions, and the computational complexity of these functions in various restricted models of computation. In order to widen the range of applicability of Forster's bound, we start with the derivation of a generalized lower bound. We present a concrete family of distributed Boolean functions where the generalized bound leads to a linear lower bound on the probabilistic communication complexity (and thus to an exponential lower bound on the number of Euclidean dimensions needed for a successful half space representation), whereas the old bound fails. We move on to a geometric characterization of the well known communication complexity class C-PP in terms of half space representations achieving a large margin. 
Our characterization hints to a close connection between the bounded error model of probabilistic communication complexity and the area of large margin classification. In the final section of the paper, we describe how our techniques can be used to prove exponential lower bounds on the size of depth-2 threshold circuits (with still some technical restrictions). Similar results can be obtained for read-k-times randomized ordered binary decision diagram and related models.", "Communication is a bottleneck in many distributed computations. In VLSI, communication constraints dictate lower bounds on the performance of chips. The two-processor information transfer model measures the communication requirements to compute functions. We study the unbounded error probabilistic version of this model. Because of its weak notion of correct output, we believe that this model measures the “intrinsic” communication complexity of functions. We present exact characterizations of the unbounded error communication complexity in terms of arrangements of hyperplanes and approximations of matrices. These characterizations establish the connection with certain classical problems in combinatorial geometry which are concerned with the configurations of points in d-dimensional real space. With the help of these characterizations, we obtain some upper and lower bounds on communication complexity. The upper bounds which we obtained for the functions—equality and verification of Hamming distance—are considerably better than their counterparts in the deterministic, the nondeterministic, and the bounded error probabilistic models. We also exhibit a function which has log n complexity. We present a counting argument to show that most functions have linear complexity. Further, we apply the logarithmic lower bound on communication complexity to obtain an Ω(n log n) bound on the time of 1-tape unbounded error probabilistic Turing machines. We believe that this is the first nontrivial lower bound obtained for such machines." ] }
0706.4298
1624353584
How to pass from local to global scales in anonymous networks? How to organize a self-stabilizing propagation of information with feedback? From the Angluin impossibility results, we cannot elect a leader in a general anonymous network. Thus, it is impossible to build a rooted spanning tree. Many problems can only be solved by probabilistic methods. In this paper we show how to use Unison to design a self-stabilizing barrier synchronization in an anonymous network. We show that the communication structure of this barrier synchronization designs a self-stabilizing wave-stream, or pipelining wave, in anonymous networks. We introduce two variants of Wave: the strong waves and the wavelets. A strong wave can be used to solve the idempotent r-operator parametrized computation problem. A wavelet deals with k-distance computation. We show how to use Unison to design a self-stabilizing wave stream, a self-stabilizing strong wave stream and a self-stabilizing wavelet stream.
@cite_3 designs a self-stabilizing Barrier Synchronization algorithm in asynchronous anonymous complete networks. For other topologies the authors assume a rooted network, so the program is not uniform but only semi-uniform. An interesting question is to give a solution to this problem in a general connected asynchronous anonymous network. As far as we know, the algorithm of @cite_2 is the only decentralised uniform wave algorithm for a general anonymous network. This algorithm requires that the processors know the diameter, or more simply a common upper bound @math on the diameter. This algorithm is not self-stabilizing (a toy round-based simulation of the underlying barrier rule follows the reference data below).
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1722576058", "149453019" ], "abstract": [ "We show how fault-tolerance can be effectively added to several types of faults in program computations that use barrier synchronization. We divide the faults that occur in practice into two classes, detectable and undetectable, and design a fully distributed program that tolerates the faults in both classes. Our program guarantees that every barrier is executed correctly even if detectable faults occur, and that eventually every barrier is executed correctly even if undetectable faults occur. Via analytical as well as simulation results we show that the cost of adding fault-tolerance is low, in part by comparing the times required by our program with that required by the corresponding fault-intolerant counterpart.", "Synchronization of ABD networks assertional verification distributed infimum approximation garbage collection." ] }
1406.6937
2133769104
The most common method to validate a DEVS model against the requirements is to simulate it several times under different conditions, with some simulation tool. The behavior of the model is compared with what the system is supposed to do. The number of different scenarios to simulate is usually infinite; therefore, selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations, to increase confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model and, thus, part of the validation process can be automated.
There are several works that use verification techniques, like model checking, to verify the correctness of a model. For instance, Napoli and Parente @cite_30 present a model-checking algorithm for Hierarchical Finite State Machines as an abstract DEVS model. They also focus on the generation of simulation configurations for DEVS, but as counter-examples obtained by the application of their model-checking algorithm. Another relevant and recent work involving verification techniques is @cite_12 , where Saadawi and Wainer introduce a new extension to the DEVS formalism, called the Rational Time-Advance DEVS (RTA-DEVS). RTA-DEVS models can be formally checked with standard model-checking algorithms and tools. Further, they introduce a methodology to transform classic DEVS models to RTA-DEVS models, allowing formal verification of classic DEVS (the core reachability loop of such checkers is sketched after the reference data below). Although model-checking techniques are formally defined and useful for proving properties and theorems over a model, their main problem is the so-called state explosion problem @cite_13 , i.e. the exponential blowup of the state space with the number of variables in any real or practical system. This makes the use of such techniques in large projects almost impossible, although model checking has been applied in real projects.
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_12" ], "mid": [ "196245653", "1498432697", "2110806620" ], "abstract": [ "Recently there has been a great attention from the scientific community towards the use of the model-checking technique as a tool for test generation in the simulation field. This paper aims to provide a useful mean to get more insights along these lines. By applying recent results in the field of graded temporal logics, we present a new efficient model-checking algorithm for Hierarchical Finite State Machines (HSM), a well established symbolism long and widely used for representing hierarchical models of discrete systems. Performing model-checking against specifications expressed using graded temporal logics has the peculiarity of returning more counterexamples within a unique run. We think that this can greatly improve the efficacy of automatically getting test cases. In particular we verify two different models of HSM against branching time temporal properties.", "Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.", "Real-time systems modeling and verification is a complex task. In many cases, formal methods have been employed to deal with the complexity of these systems, but checking those models is usually unfeasible. Modeling and simulation methods introduce a means of validating these model's specifications. In particular, Discrete Event System Specification (DEVS) models can be used for this purpose. Here, we introduce a new extension to the DEVS formalism, called the Rational Time-Advance DEVS (RTA-DEVS), which permits modeling the behavior of real-time systems that can be modeled by the classical DEVS; however, RTA-DEVS models can be formally checked with standard model-checking algorithms and tools. In order to do so, we introduce a procedure to create timed automata (TA) models that are behaviorally equivalent to the original RTA-DEVS models. 
This enables the use of the available TA tools and theories for formal model checking. Further, we introduce a methodology to transform classic DEVS models to RTA-DEVS models, thus enabling formal verification of classic DEVS with an acceptable accuracy." ] }
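The core loop of the explicit-state checkers referred to above fits in a few lines; the sketch below (the transition system is invented, not a DEVS or TA translation) does a BFS over reachable states and reports a counterexample state if a safety property is violated. It also hints at the state explosion problem: the number of states grows multiplicatively with each added bounded variable.

from collections import deque

def successors(state):
    x, y = state                       # two bounded counters
    return [((x + 1) % 4, y), (x, (y + 1) % 4)]

def safe(state):
    return state != (3, 3)             # the "bad" state to avoid

def check(init):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return False, s            # counterexample found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None

print(check((0, 0)))                   # (False, (3, 3)): the bad state is reachable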
1406.6937
2133769104
The most common method to validate a DEVS model against the requirements is to simulate it several times under different conditions, with some simulation tool. The behavior of the model is compared with what the system is supposed to do. The number of different scenarios to simulate is usually infinite; therefore, selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations, to increase confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model and, thus, part of the validation process can be automated.
K. J. Hong and T. G. Kim @cite_31 introduce a method for the verification of discrete event models. They propose a formalism, Time State Reachability Graph (TSRG), to specify modules of a discrete event model, and a methodology for the generation of test sequences to test such modules at an I/O level. A graph-theoretical analysis of TSRG then generates all possible timed I/O sequences, from which a test set of timed I/O sequences with 100% coverage is obtained (a toy coverage-driven generator is sketched after the reference data below). Another recent work that applies verification techniques to discrete event simulation is @cite_24 , where da Silva and de Melo present a method to perform simulations in an orderly way and verify properties about them using transition systems. Both the possible simulation paths and the property to be verified are described as transition systems. The verification is achieved by building a special kind of synchronous product between these two transition systems. They focused their work on the verification of properties by simulation, but not on the generation of simulations in order to validate the model.
{ "cite_N": [ "@cite_24", "@cite_31" ], "mid": [ "2400188976", "2118645685" ], "abstract": [ "Discrete event simulations can be used to analyse natural and artificial phenomena. To this end, one provides models whose behaviours are characterized by discrete events in a discrete timeline. By running such a simulation, one can then observe its properties. This suggests the possibility of applying on-the-fly verification procedures during simulations. In this work we propose a method by which this can be accomplished. It consists in modelling the simulation as a a transition system (implicitly), and the property to be verified as another transition system (explicitly). The latter we call a simulation purpose and it is used both to verify the success of the property and to guide the simulation. Algorithmically, this corresponds to building a synchronous product of these two transitions systems on-the-fly and using it to operate a simulator. The precise nature of simulation purposes, as well as the corresponding verification algorithm, are largely determined by methodological considerations important for simulations.", "Model verification examines the correctness of a model implementation with respect to a model specification. While being described from model specification, implementation prepares to execute or evaluate a simulation model by a computer program. Viewing model verification as a program test this paper proposes a method for generation of test sequences that completely covers all possible behavior in specification at an I O level. Timed State Reachability Graph (TSRG) is proposed as a means of model specification. Graph theoretical analysis of TSRG has generated a test set of timed I O event sequences, which guarantees 100 test coverage of an implementation under test." ] }
1406.6937
2133769104
The most common method to validate a DEVS model against the requirements is to simulate it several times under different conditions, with some simulation tool. The behavior of the model is compared with what the system is supposed to do. The number of different scenarios to simulate is usually infinite; therefore, selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations, to increase confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model and, thus, part of the validation process can be automated.
@cite_23 present a framework to test DEVS tools. In their framework they combine black-box and white-box testing approaches. Strictly speaking, this work is not directly related to ours because they do not validate or verify a DEVS model, but rather test DEVS implementations. However, it is useful to see how they introduce software testing techniques into the DEVS world.
{ "cite_N": [ "@cite_23" ], "mid": [ "2395032752" ], "abstract": [ "The Discrete-Event system Specification (DEVS) is a widely used formalism for discrete-event modelling and simulation. A variety of DEVS modelling and simulation tools have been implemented. Diverse implementations with platform-specific characteristics and often tailored to specific problem domains need to be tested to ensure their compliance with the precise and formal DEVS formalism specification. Such compliance allows for meaningful exchange and re-use of models. It also allows for the correct comparison of simulator implementation performance and hence of specific implementation optimizations. In this paper, we focus on testing correctness and preciseness of DEVS implementations and propose a testing framework. Our testing framework combines black-box and white-box testing approaches and uses a standard XML representation for event- and state-traces (also known as segments). We apply our testing framework to Python-DEVS and DEVS++, two concrete implementations of the Classic DEVS formalism. Analysis of the test results reveals candidate items for improvement of the two tools. Finally, insights gained into DEVS standardization are discussed." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
Statistical distributions similar to the ones considered in this paper have been previously applied to characterize the dynamics of the behavior of crowds of Web users. In an early contribution, @cite_29 analyzed browsing behaviors and found that the number of links a user is likely to follow on a Web site is distributed according to an inverse Gaussian. In @cite_34 , Wu and Huberman studied life-cycles of news items on a social bookmarking site and found that the amount of attention novel content receives is distributed log-normally. The log-normal distribution was also found to model sizes of cascades of messages passed through a peer-to-peer recommendation network @cite_25 or the number of messages exchanged in instant messaging services @cite_5 (a small fitting sketch follows the reference data below).
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_34", "@cite_25" ], "mid": [ "2157579446", "2089192108", "2058465497", "2105535951" ], "abstract": [ "We present a study of anonymized data capturing a month of high-level communication activities within the whole of the Microsoft Messenger instant-messaging system. We examine characteristics and patterns that emerge from the collective dynamics of large numbers of people, rather than the actions and characteristics of individuals. The dataset contains summary properties of 30 billion conversations among 240 million people. From the data, we construct a communication graph with 180 million nodes and 1.3 billion undirected edges, creating the largest social network constructed and analyzed to date. We report on multiple aspects of the dataset and synthesized graph. We find that the graph is well-connected and robust to node removal. We investigate on a planetary-scale the oft-cited report that people are separated by \"six degrees of separation\" and find that the average path length among Messenger users is 6.6. We find that people tend to communicate more with each other when they have similar age, language, and location, and that cross-gender conversations are both more frequent and of longer duration than conversations with the same gender.", "One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.", "The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among 1 million users of an interactive web site, digg.com, devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.", "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
While attention dynamics on shorter time scales have been modeled using random fields @cite_15 , structured models @cite_12 , or differential equations @cite_22 , long-term temporal dynamics of collective attention have previously been modeled using mixtures of power-law and Poisson distributions @cite_4 or systems of differential equations @cite_30 @cite_25 that were inspired by techniques from the area of epidemic modeling @cite_26 @cite_9 . In this context, we note that the diffusion models considered in this paper also allow for interpretations in terms of the dynamics of elementary differential equations. For instance, the Weibull model can be expressed as @math , which hints at a similarity in spirit between economic diffusion and established epidemic models that seems to merit further research.
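To make the differential-equation reading concrete, here is one common way to spell it out; the parametrization with scale λ and shape k is our notational choice for this sketch and may differ from the one used in the paper.

```latex
% Weibull adoption curve (a sketch under our parametrization) and its
% differential reading:
F(t) = 1 - e^{-(\lambda t)^{k}}, \qquad
\frac{dF}{dt} = k\,\lambda^{k} t^{k-1}\bigl(1 - F(t)\bigr).
% The (1 - F) factor mirrors the susceptible fraction in SI-type epidemic
% models, which is the similarity in spirit alluded to above.
```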
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_22", "@cite_9", "@cite_15", "@cite_25", "@cite_12" ], "mid": [ "1977332720", "2963503659", "2042034885", "2127492100", "", "2056797132", "2105535951", "" ], "abstract": [ "Many cultural traits exhibit volatile dynamics, commonly dubbed fashions or fads. Here we show that realistic fashion-like dynamics emerge spontaneously if individuals can copy others' preferences for cultural traits as well as traits themselves. We demonstrate this dynamics in simple mathematical models of the diffusion, and subsequent abandonment, of a single cultural trait which individuals may or may not prefer. We then simulate the coevolution between many cultural traits and the associated preferences, reproducing power-law frequency distributions of cultural traits (most traits are adopted by few individuals for a short time, and very few by many for a long time), as well as correlations between the rate of increase and the rate of decrease of traits (traits that increase rapidly in popularity are also abandoned quickly and vice versa). We also establish that alternative theories, that fashions result from individuals signaling their social status, or from individuals randomly copying each other, do not satisfactorily reproduce these empirical observations.", "This paper is a survey paper on stochastic epidemic models. A simple stochastic epidemic model is defined and exact and asymptotic (relying on a large community) properties are presented. The purpose of modelling is illustrated by studying effects of vaccination and also in terms of inference procedures for important parameters, such as the basic reproduction number and the critical vaccination coverage. Several generalizations towards realism, e.g. multitype and household epidemic models, are also presented, as is a model for endemic diseases.", "We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems.", "Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. 
We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.", "", "User generated information in online communities has been characterized with the mixture of a text stream and a network structure both changing over time. A good example is a web-blogging community with the daily blog posts and a social network of bloggers. An important task of analyzing an online community is to observe and track the popular events, or topics that evolve over time in the community. Existing approaches usually focus on either the burstiness of topics or the evolution of networks, but ignoring the interplay between textual topics and network structures. In this paper, we formally define the problem of popular event tracking in online communities (PET), focusing on the interplay between texts and networks. We propose a novel statistical method that models the popularity of events over time, taking into consideration the burstiness of user interest, information diffusion on the network structure, and the evolution of textual topics. Specifically, a Gibbs Random Field is defined to model the influence of historic status and the dependency relationships in the graph; thereafter a topic model generates the words in text content of the event, regularized by the Gibbs Random Field. We prove that two classic models in information diffusion and text burstiness are special cases of our model under certain situations. Empirical experiments with two different communities and datasets (i.e., Twitter and DBLP) show that our approach is effective and outperforms existing approaches.", "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.", "" ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
With respect to time series retrieved from Google Trends, epidemic models based on differential equations involving exogenous and endogenous influences have been discussed in @cite_4 . There, they were used as a means of classifying, i.e. distinguishing, different types of attention dynamics. Trend analysis based on data from Google Trends was also performed in @cite_18 , yet there the focus was on developing clustering algorithms to characterize different phases in search frequency data. The approaches in @cite_18 @cite_4 are thus related to what is reported here; however, in contrast to these contributions, we do not explicitly devise new models but consider simpler representations that implicitly account for different kinds of dynamics. Due to the simplicity of the diffusion models considered here, and because of their apparent empirical validity and theoretical plausibility, the results reported in this paper provide a new baseline for research on the mechanisms and long-term dynamics of collective attention on the Web.
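As a minimal sketch of what fitting such a "simpler representation" to a search-frequency series looks like, the following fits a Weibull-shaped attention curve to a noisy synthetic series; the series, parametrization, and starting values are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull-shaped interest curve: amplitude a, scale lam, shape k.
def weibull_curve(t, a, lam, k):
    return a * k * lam**k * t**(k - 1) * np.exp(-(lam * t) ** k)

rng = np.random.default_rng(1)
t = np.arange(1.0, 121.0)                    # e.g., 120 "weeks" of data
truth = weibull_curve(t, 100.0, 0.02, 1.8)   # rise-and-decline pattern
series = truth + rng.normal(scale=0.05, size=t.size)

params, _ = curve_fit(weibull_curve, t, series, p0=(50.0, 0.05, 1.5))
print("fitted (a, lam, k):", np.round(params, 4))
```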
{ "cite_N": [ "@cite_18", "@cite_4" ], "mid": [ "1971994493", "2042034885" ], "abstract": [ "The Social Web makes visible the ebb and flow of popular interest in topics both newsworthy (\"GulfSpill\") and trivial (\"Lolcat\"). Understanding this emergent behavior is a fundamental goal for Social Web research. Key problems include discovering emergent topics from online text sources, modeling burst activity, and predicting the future trajectory of a given topic. Past work has addressed such problems individually for specific applications, but has lacked a generalizable framework for performing both classification and prediction of topic usage. Our approach is to model a topic as a temporally ordered sequence of derived feature states and capture characteristic changes in the topic trend. These sequences are drawn from a dynamic segmentation of frequency data based on change point analysis. We employ Partitioning Around Medoids clustering on these segments to produce signatures which highlight characteristic patterns of usage growth and decay. We demonstrate how this signature model can be used to define distinctive classes of topics in multiple online contexts, including tagging systems and web-based information retrieval. Additionally, we show how the model can predict the general trajectory of interest in a particular topic.", "We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
In a delightful synchronicity, Cannarella and Spechler @cite_17 , Ribeiro @cite_19 , and we ourselves @cite_8 all published analyses of how attention to social media evolves over time in early 2014. While @cite_17 was uploaded to arXiv, @cite_19 and @cite_8 were both presented at the International World Wide Web Conference in Seoul.
{ "cite_N": [ "@cite_19", "@cite_8", "@cite_17" ], "mid": [ "2950068977", "", "1674853078" ], "abstract": [ "Driven by outstanding success stories of Internet startups such as Facebook and The Huffington Post, recent studies have thoroughly described their growth. These highly visible online success stories, however, overshadow an untold number of similar ventures that fail. The study of website popularity is ultimately incomplete without general mechanisms that can describe both successes and failures. In this work we present six years of the daily number of users (DAU) of twenty-two membership-based websites - encompassing online social networks, grassroots movements, online forums, and membership-only Internet stores - well balanced between successes and failures. We then propose a combination of reaction-diffusion-decay processes whose resulting equations seem not only to describe well the observed DAU time series but also provide means to roughly predict their evolution. This model allows an approximate automatic DAU-based classification of websites into self-sustainable v.s. unsustainable and whether the startup growth is mostly driven by marketing & media campaigns or word-of-mouth adoptions.", "", "The last decade has seen the rise of immense online social networks (OSNs) such as MySpace and Facebook. In this paper we use epidemiological models to explain user adoption and abandonment of OSNs, where adoption is analogous to infection and abandonment is analogous to recovery. We modify the traditional SIR model of disease spread by incorporating infectious recovery dynamics such that contact between a recovered and infected member of the population is required for recovery. The proposed infectious recovery SIR model (irSIR model) is validated using publicly available Google search query data for \"MySpace\" as a case study of an OSN that has exhibited both adoption and abandonment phases. The irSIR model is then applied to search query data for \"Facebook,\" which is just beginning to show the onset of an abandonment phase. Extrapolating the best fit model into the future predicts a rapid decline in Facebook activity in the next few years." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
The work by Cannarella and Spechler from Princeton is noteworthy for triggering a brief but fierce media frenzy. Just as in the work presented here, the results in @cite_17 were obtained from analyzing Google Trends time series. Differing from our approach, Cannarella and Spechler considered epidemic models to analyze search frequency time series that indicate interest in services such as MySpace or Facebook. While this methodology had earlier been applied to analyze the temporal evolution of interest in Internet memes @cite_37 , Cannarella and Spechler caused a controversy because they used their models to predict that Facebook would lose 80% of its users within only a few years; this claim prompted a tongue-in-cheek rebuttal by Facebook data scientist Mike Develin. Interestingly, our "qualitative" results in Fig. seem to corroborate Cannarella's and Spechler's predictions, and we note that they were obtained from the same data but different models. In any case, we certainly agree with Develin's objection that predictions based on search frequency data have to be taken with a grain of salt. Yet, we disagree with his argument that the social media related search interests of millions of Web users are not indicative of user engagement (see again our discussion in ) and note the curious absence of any direct engagement data in his reply.
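For illustration, the irSIR mechanics described verbally in the @cite_17 abstract (infection-like adoption, but recovery requires contact between active and recovered users) can be integrated numerically as below; the parameter values and initial conditions are invented for this sketch.

```python
import numpy as np
from scipy.integrate import odeint

# irSIR: standard SIR adoption term, but abandonment needs I-R contact.
def irsir(state, t, beta, nu, N):
    S, I, R = state
    dS = -beta * S * I / N                   # susceptibles join the OSN
    dI = beta * S * I / N - nu * I * R / N   # users leave via contact with leavers
    dR = nu * I * R / N
    return [dS, dI, dR]

N = 1.0
# R(0) must be > 0: with no recovered users, nobody would ever abandon.
y0 = [0.98, 0.01, 0.01]
t = np.linspace(0.0, 400.0, 2000)
sol = odeint(irsir, y0, t, args=(0.2, 0.3, N))
print("final (S, I, R):", np.round(sol[-1], 3))
```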
{ "cite_N": [ "@cite_37", "@cite_17" ], "mid": [ "89990147", "1674853078" ], "abstract": [ "Internet memes are phenomena that rapidly gain popularity or notoriety on the Internet. Often, modifications or spoofs add to the profile of the original idea thus turning it into a phenomenon that transgresses social and cultural boundaries. It is commonly assumed that Internet memes spread virally but scientific evidence as to this assumption is scarce. In this paper, we address this issue and investigate the epidemic dynamics of 150 famous Internet memes. Our analysis is based on time series data that were collected from Google Insights, Delicious, Digg, and StumbleUpon. We find that differential equation models from mathematical epidemiology as well as simple log-normal distributions give a good account of the growth and decline of memes. We discuss the role of log-normal distributions in modeling Internet phenomena and touch on practical implications of our findings.", "The last decade has seen the rise of immense online social networks (OSNs) such as MySpace and Facebook. In this paper we use epidemiological models to explain user adoption and abandonment of OSNs, where adoption is analogous to infection and abandonment is analogous to recovery. We modify the traditional SIR model of disease spread by incorporating infectious recovery dynamics such that contact between a recovered and infected member of the population is required for recovery. The proposed infectious recovery SIR model (irSIR model) is validated using publicly available Google search query data for \"MySpace\" as a case study of an OSN that has exhibited both adoption and abandonment phases. The irSIR model is then applied to search query data for \"Facebook,\" which is just beginning to show the onset of an abandonment phase. Extrapolating the best fit model into the future predicts a rapid decline in Facebook activity in the next few years." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
However, data that directly reflects engagement played an important role in Ribeiro's analysis performed at CMU @cite_19 . He considered statistics available from Alexa, a subsidiary of Amazon which provides Web traffic data that are gathered using the Alexa toolbar, a plugin that volunteers install in their browsers so that Alexa can track which Web pages they access.
{ "cite_N": [ "@cite_19" ], "mid": [ "2950068977" ], "abstract": [ "Driven by outstanding success stories of Internet startups such as Facebook and The Huffington Post, recent studies have thoroughly described their growth. These highly visible online success stories, however, overshadow an untold number of similar ventures that fail. The study of website popularity is ultimately incomplete without general mechanisms that can describe both successes and failures. In this work we present six years of the daily number of users (DAU) of twenty-two membership-based websites - encompassing online social networks, grassroots movements, online forums, and membership-only Internet stores - well balanced between successes and failures. We then propose a combination of reaction-diffusion-decay processes whose resulting equations seem not only to describe well the observed DAU time series but also provide means to roughly predict their evolution. This model allows an approximate automatic DAU-based classification of websites into self-sustainable v.s. unsustainable and whether the startup growth is mostly driven by marketing & media campaigns or word-of-mouth adoptions." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
Given this discussion, the approach and results presented here mark a middle ground. On the one hand, we consider simple diffusion models rather than (intricate) models for the epidemic spread of novelties. On the other hand, the statistical basis for our analysis far exceeds that of @cite_17 @cite_19 . Neither Cannarella and Spechler nor Ribeiro consider country-specific data, and neither of them considers as large a number of different services as we do in this paper. Moreover, we see the main contribution of this paper not in the predictions in Fig. but rather in the empirical observation that collective attention to social media shows highly regular patterns of growth and decline regardless of the region of origin or cultural background of crowds of Web users.
{ "cite_N": [ "@cite_19", "@cite_17" ], "mid": [ "2950068977", "1674853078" ], "abstract": [ "Driven by outstanding success stories of Internet startups such as Facebook and The Huffington Post, recent studies have thoroughly described their growth. These highly visible online success stories, however, overshadow an untold number of similar ventures that fail. The study of website popularity is ultimately incomplete without general mechanisms that can describe both successes and failures. In this work we present six years of the daily number of users (DAU) of twenty-two membership-based websites - encompassing online social networks, grassroots movements, online forums, and membership-only Internet stores - well balanced between successes and failures. We then propose a combination of reaction-diffusion-decay processes whose resulting equations seem not only to describe well the observed DAU time series but also provide means to roughly predict their evolution. This model allows an approximate automatic DAU-based classification of websites into self-sustainable v.s. unsustainable and whether the startup growth is mostly driven by marketing & media campaigns or word-of-mouth adoptions.", "The last decade has seen the rise of immense online social networks (OSNs) such as MySpace and Facebook. In this paper we use epidemiological models to explain user adoption and abandonment of OSNs, where adoption is analogous to infection and abandonment is analogous to recovery. We modify the traditional SIR model of disease spread by incorporating infectious recovery dynamics such that contact between a recovered and infected member of the population is required for recovery. The proposed infectious recovery SIR model (irSIR model) is validated using publicly available Google search query data for \"MySpace\" as a case study of an OSN that has exhibited both adoption and abandonment phases. The irSIR model is then applied to search query data for \"Facebook,\" which is just beginning to show the onset of an abandonment phase. Extrapolating the best fit model into the future predicts a rapid decline in Facebook activity in the next few years." ] }
1406.6973
1562703808
Statements about entities occur everywhere, from newspapers and web pages to structured databases. Correlating references to entities across systems that use different identifiers or names for them is a widespread problem. In this paper, we show how shared knowledge between systems can be used to solve this problem. We present "reference by description", a formal model for resolving references. We provide some results on the conditions under which a randomly chosen entity in one system can, with high probability, be mapped to the same entity in a different system.
The research reported in @cite_1 , @cite_6 , and @cite_0 is archetypal of the approaches that have been followed for solving this class of problems. Because of the simplicity of the model, much of the attention has focussed on the development of algorithms capable of correctly performing the matching between attributes. Further, most of the work has focussed on overcoming the lexical heterogeneity of the representation of string values and on differences introduced by data acquisition and entry errors.
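A toy sketch of the string-matching flavor of this work, pairing attribute values across two noisy sources with an edit-based similarity; the example records, normalization, and threshold are made up for illustration and do not come from the cited studies.

```python
from difflib import SequenceMatcher

def normalize(s: str) -> str:
    """Cheap canonicalization: lowercase and drop punctuation."""
    return "".join(c for c in s.lower() if c.isalnum() or c.isspace()).strip()

def similarity(a: str, b: str) -> float:
    """Edit-based similarity in [0, 1] (Ratcliff/Obershelp via difflib)."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

source_a = ["J. Smith, MIT", "Acme Corp.", "Databse Systems"]   # note the typo
source_b = ["John Smith (MIT)", "ACME Corporation", "Database Systems"]

THRESHOLD = 0.6
for a in source_a:
    best = max(source_b, key=lambda b: similarity(a, b))
    score = similarity(a, best)
    if score >= THRESHOLD:
        print(f"{a!r} -> {best!r} (score {score:.2f})")
```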
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_6" ], "mid": [ "2159481891", "2108991785", "2085099553" ], "abstract": [ "Identity uncertainty is a pervasive problem in real-world data analysis. It arises whenever objects are not labeled with unique identifiers or when those identifiers may not be perceived perfectly. In such cases, two observations may or may not correspond to the same object. In this paper, we consider the problem in the context of citation matching—the problem of deciding which citations correspond to the same publication. Our approach is based on the use of a relational probability model to define a generative model for the domain, including models of author and title corruption and a probabilistic citation grammar. Identity uncertainty is handled by extending standard models to incorporate probabilities over the possible mappings between terms in the language and objects in the domain. Inference is based on Markov chain Monte Carlo, augmented with specific methods for generating efficient proposals when the domain contains many objects. Results on several citation data sets show that the method outperforms current algorithms for citation matching. The declarative, relational nature of the model also means that our algorithm can determine object characteristics such as author names by combining multiple citations of multiple papers.", "Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. In this paper, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area", "The web contains a large quantity of unstructured information. In many cases, it is possible to heuristically extract structured information, but the resulting databases are \": they contain inconsistencies and duplication, and lack unique, consistently-used object identi ers. Examples include large bibliographic databases harvested from raw scienti c papers or databases constructed by merging heterogeneous \" databases. Here we formally model a soft database as a noisy version of some unknown hard database. We then consider the hardening problem, i.e., the problem of inferring the most likely underlying hard database given a particular soft database. A key feature of our approach is that hardening is global | many sources of evidence for a given hard fact are taken into account. We formulate hardening as an optimization problem and give a nontrivial nearly linear time algorithm for nding a local optimum." ] }
1708.04801
2746114819
Stochastic gradient descent (SGD) is a popular stochastic optimization method in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel SGD, often require all nodes to have the same performance or to consume equal quantities of data. However, these requirements are difficult to satisfy when the parallel SGD algorithms run in a heterogeneous computing environment; low-performance nodes will exert a negative influence on the final result. In this paper, we propose an algorithm called weighted parallel SGD (WP-SGD). WP-SGD combines weighted model parameters from different nodes in the system to produce the final output. WP-SGD makes use of the reduction in standard deviation to compensate for the loss from the inconsistency in performance of nodes in the cluster, which means that WP-SGD does not require that all nodes consume equal quantities of data. We also analyze the theoretical feasibility of running two other parallel SGD algorithms combined with WP-SGD in a heterogeneous environment. The experimental results show that WP-SGD significantly outperforms the traditional parallel SGD algorithms on distributed training systems with an unbalanced workload.
Delay SGD algorithms first appeared in the work of @cite_6 . In a delay SGD algorithm, the current model parameters are updated with a gradient computed from model parameters that are @math iterations old ( @math is a random number where @math , in which @math is a constant); the iteration step for delay SGD algorithms therefore applies a stale gradient to the current iterate. In the Hogwild! algorithm @cite_0 , under some restrictions, parallel SGD can be implemented in a lock-free style, which is robust to noise @cite_15 . However, these methods have the consequence that the convergence speed is decreased by o( @math ). To ensure that the delay stays bounded, communication overhead is unavoidable, which hurts performance. The trade-off in delay SGD is therefore between delay, degree of parallelism, and system efficiency.
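A minimal single-process simulation of the stale-gradient update described above, on a least-squares toy problem; the objective, delay distribution, and step size are illustrative assumptions rather than the exact setup of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=1000)

def grad(w, i):
    """Stochastic gradient of 0.5 * (x_i . w - y_i)^2 at w."""
    return (X[i] @ w - y[i]) * X[i]

tau_max = 8           # upper bound on the random delay tau
eta = 0.01            # constant step size
w = np.zeros(10)
history = [w.copy()]  # recent iterates, so stale parameters can be read back

for step in range(5000):
    tau = rng.integers(0, min(tau_max, len(history)))  # delay in [0, tau_max)
    w_stale = history[-1 - tau]                        # parameters tau steps old
    i = rng.integers(0, len(y))
    w = w - eta * grad(w_stale, i)                     # apply the stale gradient
    history.append(w.copy())
    if len(history) > tau_max + 1:                     # keep only what we need
        history.pop(0)

print("parameter error:", np.linalg.norm(w - w_true))
```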
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_6" ], "mid": [ "2951781666", "2188647300", "" ], "abstract": [ "Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.", "We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from asynchrony. We also give empirical evidence demonstrating the strong performance of asynchronous, parallel stochastic optimization schemes, demonstrating that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods. In short, we show that for many stochastic approximation problems, as Freddie Mercury sings in Queen's Bohemian Rhapsody, \"Nothing really matters.\"", "" ] }
1708.04801
2746114819
Stochastic gradient descent (SGD) is a popular stochastic optimization method in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel SGD, often require all nodes to have the same performance or to consume equal quantities of data. However, these requirements are difficult to satisfy when the parallel SGD algorithms run in a heterogeneous computing environment; low-performance nodes will exert a negative influence on the final result. In this paper, we propose an algorithm called weighted parallel SGD (WP-SGD). WP-SGD combines weighted model parameters from different nodes in the system to produce the final output. WP-SGD makes use of the reduction in standard deviation to compensate for the loss from the inconsistency in performance of nodes in the cluster, which means that WP-SGD does not require that all nodes consume equal quantities of data. We also analyze the theoretical feasibility of running two other parallel SGD algorithms combined with WP-SGD in a heterogeneous environment. The experimental results show that WP-SGD significantly outperforms the traditional parallel SGD algorithms on distributed training systems with an unbalanced workload.
Along with parallel SGD algorithms, many other kinds of numerical optimization algorithms have been proposed, such as PASSCoDe @cite_19 and CoCoA @cite_7 . They share many new features, such as fast convergence toward the end of the training phase. Most of them are formulated from the dual coordinate descent (ascent) perspective, and hence can only be used for problems whose dual function can be computed. Moreover, traditional SGD still plays an important role in those algorithms.
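To make the dual coordinate perspective concrete, here is a single-threaded SDCA-style sketch for the L2-regularized hinge loss (linear SVM), using the standard closed-form coordinate step; this is our illustrative baseline, not the actual PASSCoDe or CoCoA procedure, whose contribution lies in how such steps are parallelized.

```python
import numpy as np

# min_w (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>)
rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))
lam = 0.01

alpha = np.zeros(n)            # dual variables; alpha_i * y_i stays in [0, 1]
w = X.T @ alpha / (lam * n)    # primal iterate kept in sync with alpha

for epoch in range(20):
    for i in rng.permutation(n):
        xi, yi = X[i], y[i]
        # Closed-form maximizer of the dual along coordinate i (hinge loss):
        cand = (1.0 - yi * (xi @ w)) / (xi @ xi / (lam * n)) + alpha[i] * yi
        delta = yi * np.clip(cand, 0.0, 1.0) - alpha[i]
        alpha[i] += delta
        w += delta * xi / (lam * n)  # maintain w = X^T alpha / (lam * n)

print("train accuracy:", np.mean(np.sign(X @ w) == y))
```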
{ "cite_N": [ "@cite_19", "@cite_7" ], "mid": [ "2950002113", "2963861706" ], "abstract": [ "Stochastic Dual Coordinate Descent (SDCD) has become one of the most efficient ways to solve the family of @math -regularized empirical risk minimization problems, including linear SVM, logistic regression, and many others. The vanilla implementation of DCD is quite slow; however, by maintaining primal variables while updating dual variables, the time complexity of SDCD can be significantly reduced. Such a strategy forms the core algorithm in the widely-used LIBLINEAR package. In this paper, we parallelize the SDCD algorithms in LIBLINEAR. In recent research, several synchronized parallel SDCD algorithms have been proposed, however, they fail to achieve good speedup in the shared memory multi-core setting. In this paper, we propose a family of asynchronous stochastic dual coordinate descent algorithms (ASDCD). Each thread repeatedly selects a random dual variable and conducts coordinate updates using the primal variables that are stored in the shared memory. We analyze the convergence properties when different locking atomic mechanisms are applied. For implementation with atomic operations, we show linear convergence under mild conditions. For implementation without any atomic operations or locking, we present the first backward error analysis for ASDCD under the multi-core environment, showing that the converged solution is the exact solution for a primal problem with perturbed regularizer. Experimental results show that our methods are much faster than previous parallel coordinate descent solvers.", "Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, COCOA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, COCOA converges to the same .001-accurate solution quality on average 25 × as quickly." ] }
1708.04866
2766615649
Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists on the economics of attack acquisition and deployment. Yet, this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation, and the effects of market factors on likelihood of exploit. Our data is collected first-hand from a prominent Russian cybercrime market where the trading of the most active attack tools reported by the security industry happens. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than currently often assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion both in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on likelihood of attack realization, and find strong evidence of the correlation between market activity and exploit deployment. We discuss implications on vulnerability metrics, economics, and exploit measurement.
The economics and development of underground markets were perhaps first tackled in @cite_75 . On the other hand, @cite_65 showed that cybercrime economics are distinctively problematic in that the lack of effective rule enforcement mechanisms may hinder fair trading and, as a consequence, the existence of the market itself. A few studies analyzed the evolution of cybercrime markets @cite_35 @cite_51 @cite_16 @cite_43 @cite_0 @cite_42 and provided estimates of malware development @cite_22 and attack likelihood @cite_32 , but no quantitative account of economic factors such as exploit pricing and adoption is currently reported in the literature @cite_36 @cite_22 . In this paper we provide the first empirical quantification of these economic aspects by analyzing data collected first-hand from a prominent cybercrime market.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_36", "@cite_42", "@cite_65", "@cite_32", "@cite_0", "@cite_43", "@cite_51", "@cite_16", "@cite_75" ], "mid": [ "2153532176", "2515415680", "2513861733", "2122551442", "", "2744879410", "2110271878", "1988015967", "1972791208", "2082180526", "2132280055" ], "abstract": [ "Over the last decade, the nature of cybercrime has transformed from naive vandalism to profit-driven, leading to the emergence of a global underground economy. A noticeable trend which has surfaced in this economy is the repeated use of forums to operate online stolen data markets. Using interaction data from three prominent carding forums: Shadowcrew, Cardersmarket and Darkmarket, this study sets out to understand why forums are repeatedly chosen to operate online stolen data markets despite numerous successful infiltrations by law enforcement in the past. Drawing on theories from criminology, social psychology, economics and network science, this study has identified four fundamental socio-economic mechanisms offered by carding forums: (1) formal control and coordination; (2) social networking; (3) identity uncertainty mitigation; (4) quality uncertainty mitigation. Together, they give rise to a sophisticated underground market regulatory system that facilitates underground trading over the Internet and thus drives the expansion of the underground economy.", "During the last decades, the problem of malicious and unwanted software (malware) has surged in numbers and sophistication. Malware plays a key role in most of today’s cyber attacks and has consolidated as a commodity in the underground economy. In this work, we analyze the evolution of malware since the early 1980s to date from a software engineering perspective. We analyze the source code of 151 malware samples and obtain measures of their size, code quality, and estimates of the development costs (effort, time, and number of people). Our results suggest an exponential increment of nearly one order of magnitude per decade in aspects such as size and estimated effort, with code quality metrics similar to those of regular software. Overall, this supports otherwise confirmed claims about the increasing complexity of malware and its production progressively becoming an industry.", "A software defect that exposes a software system to a cyber security attack is known as a software vulnerability. A software security exploit is an engineered software solution that successfully exploits the vulnerability. Exploits are used to break into computer systems, but exploits are currently used also for security testing, security analytics, intrusion detection, consultation, and other legitimate and legal purposes. A well-established market emerged in the 2000s for software vulnerabilities. The current market segments populated by small and medium-sized companies exhibit signals that may eventually lead to a similar industrialization of software exploits. To these ends and against these industry trends, this paper observes the first online market place for trading exploits between buyers and sellers. The paper adopts three different perspectives to study the case. The paper (a) portrays the studied exploit market place against the historical background in the software security industry. A qualitative assessment is made to (b) evaluate the case against the common characteristics of traditional online market places. 
The qualitative observations are used in the quantitative part (c) for predicting the price of exploits with partial least squares regression. The results show that (i) the case is unique from a historical perspective, although (ii) the online market place characteristics are familiar. The regression estimates also indicate that (iii) the pricing of exploits is only partially dependent on such factors as the targeted platform, the date of disclosure of the exploited vulnerability, and the quality assurance service provided by the market place provider. The results allow to contemplate (iv) practical means for enhancing the market place.", "Underground forums, where participants exchange information on abusive tactics and engage in the sale of illegal goods and services, are a form of online social network (OSN). However, unlike traditional OSNs such as Facebook, in underground forums the pattern of communications does not simply encode pre-existing social relationships, but instead captures the dynamic trust relationships forged between mutually distrustful parties. In this paper, we empirically characterize six different underground forums --- BlackHatWorld, Carders, HackSector, HackE1ite, Freehack, and L33tCrew --- examining the properties of the social networks formed within, the content of the goods and services being exchanged, and lastly, how individuals gain and lose trust in this setting.", "", "Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage on the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of \"weaponized\" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach.", "The rise of cybercrime in the last decade is an economic case of individuals responding to monetary and psychological incentives. Two main drivers for cybercrime can be identified: the potential gains from cyberattacks are increasing with the growth of importance of the Internet, and malefactors' expected costs (e.g., the penalties and the likelihood of being apprehended and prosecuted) are frequently lower compared with traditional crimes. In short, computer-mediated crimes are more convenient, and profitable, and less expensive and risky than crimes not mediated by the Internet.
The increase in cybercriminal activities, coupled with ineffective legislation and ineffective law enforcement pose critical challenges for maintaining the trust and security of our computer infrastructures. Modern computer attacks encompass a broad spectrum of economic activity, where various malfeasants specialize in developing specific goods (exploits, botnets, mailers) and services (distributing malware, monetizing stolen credentials, providing web hosting, etc.). A typical Internet fraud involves the actions of many of these individuals, such as malware writers, botnet herders, spammers, data brokers, and money launderers. Assessing the relationships among various malfeasants is an essential piece of information for discussing economic, technical, and legal proposals to address cybercrime. This paper presents a framework for understanding the interactions between these individuals and how they operate. We follow three steps. First, we present the general architecture of common computer attacks, and discuss the flow of goods and services that supports the underground economy. We discuss the general flow of resources between criminal groups and victims, and the interactions between different specialized cybercriminals. Second, we describe the need to estimate the social costs of cybercrime and the profits of cybercriminals in order to identify optimal levels of protection. One of the main problems in quantifying the precise impact of cybercrime is that computer attacks are not always detected, or reported. Therefore we propose the need to develop a more systematic and transparent way of reporting computer breaches and their effects. Finally, we propose some possible countermeasures against criminal activities. In particular, we analyze the role of private and public protection, and the incentives of multiple stakeholders.", "Cybercrime's tentacles reach deeply into the Internet. A complete, underground criminal economy has developed that lets malicious actors steal money through the Web. The authors detail this enterprise, showing how information, expertise, and money flow through it. Understanding the underground economy's structure is critical for fighting it.", "Cybercrime activities are supported by infrastructures and services originating from an underground economy. The current understanding of this phenomenon is that the cybercrime economy ought to be fraught with information asymmetry and adverse selection problems. They should make the effects that we observe every day impossible to sustain. In this paper, we show that the market structure and design used by cyber criminals have evolved toward a market design that is similar to legitimate, thriving, online forum markets such as eBay. We illustrate this evolution by comparing the market regulatory mechanisms of two underground forum markets: 1) a failed market for credit cards and other illegal goods and 2) another, extremely active marketplace for vulnerabilities, exploits, and cyber attacks in general. The comparison shows that cybercrime markets evolved from unruly, scam for scammers market mechanisms to mature, regulated mechanisms that greatly favors trade efficiency.", "We investigate the emergence of the exploit-as-a-service model for driveby browser compromise. In this regime, attackers pay for an exploit kit or service to do the \"dirty work\" of exploiting a victim's browser, decoupling the complexities of browser and plugin vulnerabilities from the challenges of generating traffic to a website under the attacker's control.
Upon a successful exploit, these kits load and execute a binary provided by the attacker, effectively transferring control of a victim's machine to the attacker. In order to understand the impact of the exploit-as-a-service paradigm on the malware ecosystem, we perform a detailed analysis of the prevalence of exploit kits, the families of malware installed upon a successful exploit, and the volume of traffic that malicious web sites receive. To carry out this study, we analyze 77,000 malicious URLs received from Google Safe Browsing, along with a crowd-sourced feed of blacklisted URLs known to direct to exploit kits. These URLs led to over 10,000 distinct binaries, which we ran in a contained environment. Our results show that many of the most prominent families of malware now propagate through driveby downloads--32 families in all. Their activities are supported by a handful of exploit kits, with Blackhole accounting for 29% of all malicious URLs in our data, followed in popularity by Incognito. We use DNS traffic from real networks to provide a unique perspective on the popularity of malware families based on the frequency that their binaries are installed by drivebys, as well as the lifetime and popularity of domains funneling users to exploits.", "This paper studies an active underground economy which specializes in the commoditization of activities such as credit card fraud, identity theft, spamming, phishing, online credential theft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from \"hacking for fun\" to \"hacking for profit\" has given birth to a societal substrate mature enough to steal wealth into the millions of dollars in less than one year." ] }
1708.04866
2766615649
Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists on the economics of attack acquisition and deployment. Yet, this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation, and the effects of market factors on likelihood of exploit. Our data is collected first-hand from a prominent Russian cybercrime market where the trading of the most active attack tools reported by the security industry happens. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than currently often assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion both in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on likelihood of attack realization, and find strong evidence of the correlation between market activity and exploit deployment. We discuss implications on vulnerability metrics, economics, and exploit measurement.
Recent work has studied the services and monetization schemes of cyber criminals, e.g., laundering money through the acquisition of expensive goods @cite_50 or renting infected systems @cite_16 @cite_74 . The provision of the technological means by which these attacks are perpetrated remains, however, relatively unexplored @cite_36 , with the exception of a few technical insights from industrial reports @cite_38 @cite_57 . Similarly, a few studies estimated the economic effects of cybercrime activities on the real-world economy, for example by analyzing the monetization of stolen credit cards and banking information @cite_72 , the realization of profits from spam campaigns @cite_6 , the registration of fake online accounts @cite_5 , and the provision of booter services for distributed denial of service attacks @cite_46 . However, a characterization of the costs of the technology (as opposed to the earnings it generates) and of the relation of trade factors to the realization of an attack is still missing. This work provides a first insight into the value of vulnerability exploits in the underground markets and the effects of market factors on the presence of attacks in the wild.
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_6", "@cite_57", "@cite_72", "@cite_74", "@cite_50", "@cite_5", "@cite_46", "@cite_16" ], "mid": [ "", "2513861733", "2091692346", "", "1412796528", "2345976710", "2014005029", "1815362064", "2164322841", "2082180526" ], "abstract": [ "", "A software defect that exposes a software system to a cyber security attack is known as a software vulnerability. A software security exploit is an engineered software solution that successfully exploits the vulnerability. Exploits are used to break into computer systems, but exploits are currently used also for security testing, security analytics, intrusion detection, consultation, and other legitimate and legal purposes. A well-established market emerged in the 2000s for software vulnerabilities. The current market segments populated by small and medium-sized companies exhibit signals that may eventually lead to a similar industrialization of software exploits. To these ends and against these industry trends, this paper observes the first online market place for trading exploits between buyers and sellers. The paper adopts three different perspectives to study the case. The paper (a) portrays the studied exploit market place against the historical background in the software security industry. A qualitative assessment is made to (b) evaluate the case against the common characteristics of traditional online market places. The qualitative observations are used in the quantitative part (c) for predicting the price of exploits with partial least squares regression. The results show that (i) the case is unique from a historical perspective, although (ii) the online market place characteristics are familiar. The regression estimates also indicate that (iii) the pricing of exploits is only partially dependent on such factors as the targeted platform, the date of disclosure of the exploited vulnerability, and the quality assurance service provided by the market place provider. The results allow to contemplate (iv) practical means for enhancing the market place.", "The \"conversion rate\" of spam--the probability that an unsolicited e-mail will ultimately elicit a \"sale\"--underlies the entire spam value proposition. However, our understanding of this critical behavior is quite limited, and the literature lacks any quantitative study concerning its true value. In this paper we present a methodology for measuring the conversion rate of spam. Using a parasitic infiltration of an existing botnet's infrastructure, we analyze two spam campaigns: one designed to propagate a malware Trojan, the other marketing on-line pharmaceuticals. For nearly a half billion spam e-mails we identify the number that are successfully delivered, the number that pass through popular anti-spam filters, the number that elicit user visits to the advertised sites, and the number of \"sales\" and \"infections\" produced.", "", "This chapter documents what we believe to be the first systematic study of the costs of cybercrime. The initial workshop paper was prepared in response to a request from the UK Ministry of Defence following scepticism that previous studies had hyped the problem. For each of the main categories of cybercrime we set out what is and is not known of the direct costs, indirect costs and defence costs – both to the UK and to the world as a whole. 
We distinguish carefully between traditional crimes that are now “cyber” because they are conducted online (such as tax and welfare fraud); transitional crimes whose modus operandi has changed substantially as a result of the move online (such as credit card fraud); new crimes that owe their existence to the Internet; and what we might call platform crimes such as the provision of botnets which facilitate other crimes rather than being used to extract money from victims directly. As far as direct costs are concerned, we find that traditional offences such as tax and welfare fraud cost the typical citizen in the low hundreds of pounds/euros/dollars a year; transitional frauds cost a few pounds/euros/dollars; while the new computer crimes cost in the tens of pence/cents. However, the indirect costs and defence costs are much higher for transitional and new crimes. For the former they may be roughly comparable to what the criminals earn, while for the latter they may be an order of magnitude more. As a striking example, the botnet behind a third of the spam sent in 2010 earned its owners around $2.7 million, while worldwide expenditures on spam prevention probably exceeded a billion dollars. We are extremely inefficient at fighting cybercrime; or to put it another way, cyber-crooks are like terrorists or metal thieves in that their activities impose disproportionate costs on society. Some of the reasons for this are well-known: cybercrimes are global and have strong externalities, while traditional crimes such as burglary and car theft are local, and the associated equilibria have emerged after many years of optimisation. As for the more direct question of what should be done, our figures suggest that we should spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more in response – that is, on the prosaic business of hunting down cyber-criminals and throwing them in jail.", "This research uses differential association, techniques of neutralization, and rational choice theory to study those who operate “booter services”: websites that illegally offer denial-of-service attacks for a fee. Booter services provide “easy money” for the young males that run them. The operators claim they provide legitimate services for network testing, despite acknowledging that their services are used to attack other targets. Booter services are advertised through the online communities where the skills are learned and definitions favorable toward offending are shared. Some financial services proactively frustrate the provision of booter services, by closing the accounts used for receiving payments.", "Credit card fraud has seen rampant increase in the past years, as customers use credit cards and similar financial instruments frequently. Both online and brick-and-mortar outfits repeatedly fall victim to cybercriminals who siphon off credit card information in bulk. Despite the many and creative ways that attackers use to steal and trade credit card information, the stolen information can rarely be used to withdraw money directly, due to protection mechanisms such as PINs and cash advance limits. As such, cybercriminals have had to devise more advanced monetization schemes to work around the current restrictions. One monetization scheme that has been steadily gaining traction are reshipping scams. In such scams, cybercriminals purchase high-value or highly-demanded products from online merchants using stolen payment instruments, and then ship the items to a credulous citizen. 
This person, who has been recruited by the scammer under the guise of \"work-from-home\" opportunities, then forwards the received products to the cybercriminals, most of whom are located overseas. Once the goods reach the cybercriminals, they are then resold on the black market for an illicit profit. Due to the intricacies of this kind of scam, it is exceedingly difficult to trace, stop, and return shipments, which is why reshipping scams have become a common means for miscreants to turn stolen credit cards into cash. In this paper, we report on the first large-scale analysis of reshipping scams, based on information that we obtained from multiple reshipping scam websites. We provide insights into the underground economy behind reshipping scams, such as the relationships among the various actors involved, the market size of this kind of scam, and the associated operational churn. We find that there exist prolific reshipping scam operations, with one having shipped nearly 6,000 packages in just 9 months of operation, exceeding 7.3 million US dollars in yearly revenue, contributing to an overall reshipping scam revenue of an estimated 1.8 billion US dollars per year. Finally, we propose possible approaches to intervene and disrupt reshipping scam services.", "As web services such as Twitter, Facebook, Google, and Yahoo now dominate the daily activities of Internet users, cyber criminals have adapted their monetization strategies to engage users within these walled gardens. To facilitate access to these sites, an underground market has emerged where fraudulent accounts - automatically generated credentials used to perpetrate scams, phishing, and malware - are sold in bulk by the thousands. In order to understand this shadowy economy, we investigate the market for fraudulent Twitter accounts to monitor prices, availability, and fraud perpetrated by 27 merchants over the course of a 10-month period. We use our insights to develop a classifier to retroactively detect several million fraudulent accounts sold via this marketplace, 95% of which we disable with Twitter's help. During active months, the 27 merchants we monitor appeared responsible for registering 10-20% of all accounts later flagged for spam by Twitter, generating $127-459K for their efforts.", "In this paper, we investigate the phenomenon of low-cost DDoS-As-a-Service also known as Booter services. While we are aware of the existence of the underground economy of Booters, we do not have much insight into their internal operations, including the users of such services, the usage patterns, the attack infrastructure, and the victims [6]. In this paper, we present a brief analysis on the operations of a Booter known as TwBooter based on a publicly-leaked dump of their operational database. This data includes the attack infrastructure used for mounting attacks, details on service subscribers, and the targets of attacks. Our analysis reveals that this service earned over $7,500 a month and was used to launch over 48,000 DDoS attacks against 11,000 distinct victims including government websites and news sites in less than two months of operation.", "We investigate the emergence of the exploit-as-a-service model for drive-by browser compromise. In this regime, attackers pay for an exploit kit or service to do the \"dirty work\" of exploiting a victim's browser, decoupling the complexities of browser and plugin vulnerabilities from the challenges of generating traffic to a website under the attacker's control. 
Upon a successful exploit, these kits load and execute a binary provided by the attacker, effectively transferring control of a victim's machine to the attacker. In order to understand the impact of the exploit-as-a-service paradigm on the malware ecosystem, we perform a detailed analysis of the prevalence of exploit kits, the families of malware installed upon a successful exploit, and the volume of traffic that malicious web sites receive. To carry out this study, we analyze 77,000 malicious URLs received from Google Safe Browsing, along with a crowd-sourced feed of blacklisted URLs known to direct to exploit kits. These URLs led to over 10,000 distinct binaries, which we ran in a contained environment. Our results show that many of the most prominent families of malware now propagate through drive-by downloads--32 families in all. Their activities are supported by a handful of exploit kits, with Blackhole accounting for 29% of all malicious URLs in our data, followed in popularity by Incognito. We use DNS traffic from real networks to provide a unique perspective on the popularity of malware families based on the frequency that their binaries are installed by drive-bys, as well as the lifetime and popularity of domains funneling users to exploits." ] }
1708.04866
2766615649
Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists on the economics of attack acquisition and deployment. Yet, this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation, and the effects of market factors on likelihood of exploit. Our data is collected first-handedly from a prominent Russian cybercrime market where the trading of the most active attack tools reported by the security industry happens. Our findings reveal that exploits in the underground are priced similarly or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than currently often assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion both in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on likelihood of attack realization, and find strong evidence of the correlation between market activity and exploit deployment. We discuss implications on vulnerability metrics, economics, and exploit measurement.
The presence of a cybercrime economy that absorbs vulnerabilities and generates attacks motivated the security community to study the devising of 'legitimate' vulnerability markets that attract security researchers away from the illegal marketplaces @cite_44 . Whereas several market mechanisms have been proposed @cite_31 @cite_41 , their effectiveness in deterring attacks is not clear @cite_2 @cite_63 @cite_41 . So-called responsible vulnerability disclosure is incentivized by the presence of multiple bug-hunting programs run by providers such as Google, Facebook, and Microsoft, or by 'umbrella' organizations that coordinate vulnerability reporting and disclosure @cite_3 @cite_36 @cite_44 . It is, however, unclear how these compare against the cybercrime economy, as several key parameters such as exploit pricing in the underground are currently unknown. Further, it remains uncertain whether the adoption of vulnerability disclosure mechanisms has a clear effect on the risk of attack in the wild @cite_2 . This study fills this gap by providing an empirical analysis of exploit pricing in the underground, and by evaluating the effect of cybercrime market factors on the actual realization of attacks in the wild.
{ "cite_N": [ "@cite_41", "@cite_36", "@cite_3", "@cite_44", "@cite_63", "@cite_2", "@cite_31" ], "mid": [ "2117405938", "2513861733", "1427242644", "2021348304", "", "", "2107449619" ], "abstract": [ "Software vulnerability disclosure has become a critical area of concern for policymakers. Traditionally, a Computer Emergency Response Team (CERT) acts as an infomediary between benign identifiers (who voluntarily report vulnerability information) and software users. After verifying a reported vulnerability, CERT sends out a public advisory so that users can safeguard their systems against potential exploits. Lately, firms such as iDefense have been implementing a new market-based approach for vulnerability information. The market-based infomediary provides monetary rewards to identifiers for each vulnerability reported. The infomediary then shares this information with its client base. Using this information, clients protect themselves against potential attacks that exploit those specific vulnerabilities.The key question addressed in our paper is whether movement toward such a market-based mechanism for vulnerability disclosure leads to a better social outcome. Our analysis demonstrates that an active unregulated market-based mechanism for vulnerabilities almost always underperforms a passive CERT-type mechanism. This counterintuitive result is attributed to the market-based infomediary's incentive to leak the vulnerability information inappropriately. If a profit-maximizing firm is not allowed to (or chooses not to) leak vulnerability information, we find that social welfare improves. Even a regulated market-based mechanism performs better than a CERT-type one, but only under certain conditions. Finally, we extend our analysis and show that a proposed mechanism--federally funded social planner--always performs better than a market-based mechanism.", "A software defect that exposes a software system to a cyber security attack is known as a software vulnerability. A software security exploit is an engineered software solution that successfully exploits the vulnerability. Exploits are used to break into computer systems, but exploits are currently used also for security testing, security analytics, intrusion detection, consultation, and other legitimate and legal purposes. A well-established market emerged in the 2000s for software vulnerabilities. The current market segments populated by small and medium-sized companies exhibit signals that may eventually lead to a similar industrialization of software exploits. To these ends and against these industry trends, this paper observes the first online market place for trading exploits between buyers and sellers. The paper adopts three different perspectives to study the case. The paper (a) portrays the studied exploit market place against the historical background in the software security industry. A qualitative assessment is made to (b) evaluate the case against the common characteristics of traditional online market places. The qualitative observations are used in the quantitative part (c) for predicting the price of exploits with partial least squares regression. The results show that (i) the case is unique from a historical perspective, although (ii) the online market place characteristics are familiar. 
The regression estimates also indicate that (iii) the pricing of exploits is only partially dependent on such factors as the targeted platform, the date of disclosure of the exploited vulnerability, and the quality assurance service provided by the market place provider. The results allow to contemplate (iv) practical means for enhancing the market place.", "We perform an empirical study to better understand two well-known vulnerability rewards programs, or VRPs, which software vendors use to encourage community participation in finding and responsibly disclosing software vulnerabilities. The Chrome VRP has cost approximately @math 570,000 over the last 3 years and has yielded 190 bounties. 28% of Chrome's patched vulnerabilities appearing in security advisories over this period, and 24% of Firefox's, are the result of VRP contributions. Both programs appear economically efficient, comparing favorably to the cost of hiring full-time security researchers. The Chrome VRP features low expected payouts accompanied by high potential payouts, while the Firefox VRP features fixed payouts. Finding vulnerabilities for VRPs typically does not yield a salary comparable to a full-time job; the common case for recipients of rewards in either program is that they have received only one reward. Firefox has far more critical-severity vulnerabilities than Chrome, which we believe is attributable to an architectural difference between the two browsers.", "In recent years, many organizations have established bounty programs that attract white hat hackers who contribute vulnerability reports of web systems. In this paper, we collect publicly available data of two representative web vulnerability discovery ecosystems (Wooyun and HackerOne) and study their characteristics, trajectory, and impact. We find that both ecosystems include large and continuously growing white hat communities which have provided significant contributions to organizations from a wide range of business sectors. We also analyze vulnerability trends, response and resolve behaviors, and reward structures of participating organizations. Our analysis based on the HackerOne dataset reveals that a considerable number of organizations exhibit decreasing trends for reported web vulnerabilities. We further conduct a regression study which shows that monetary incentives have a significantly positive correlation with the number of vulnerabilities reported. Finally, we make recommendations aimed at increasing participation by white hats and organizations in such ecosystems.", "", "", "Measuring software security is difficult and inexact; as a result, the market for secure software has been compared to a ‘market of lemons.’ Schechter has proposed a vulnerability market in which software producers offer a time-variable reward to free-market testers who identify vulnerabilities. This vulnerability market can be used to improve testing and to create a relative metric of product security. This paper argues that such a market can best be considered as an auction; auction theory is then used to tune the structure of this ‘bug auction’ for efficiency and to better defend against attacks. The incentives for the software producer are also considered, and some fundamental problems with the concept are articulated." ] }
1708.04903
2745522234
Non-linear, especially convex, objective functions have been extensively studied in recent years in which approaches rely crucially on the convexity property of cost functions. In this paper, we present primal-dual approaches based on configuration linear programs to design competitive online algorithms for problems with arbitrarily-grown objective. This approach is particularly appropriate for non-linear (non-convex) objectives in online setting. We first present a simple greedy algorithm for a general cost-minimization problem. The competitive ratio of the algorithm is characterized by means of a notion, called smoothness, which is inspired by a similar concept in the context of algorithmic game theory. The algorithm gives optimal (up to a constant factor) competitive ratios while applying to different contexts such as network routing, vector scheduling, energy-efficient scheduling and non-convex facility location. Next, we consider the online @math covering problems with non-convex objective. Building upon the resilient ideas from the primal-dual framework with configuration LPs, we derive a competitive algorithm for these problems. Our result generalizes the online primal-dual algorithm developed recently for convex objectives with monotone gradients to non-convex objectives. The competitive ratio is now characterized by a new concept, called local smoothness --- a notion inspired by the smoothness. Our algorithm yields tight competitive ratio for the objectives such as the sum of @math -norms and gives competitive solutions for online problems of submodular minimization and some natural non-convex minimization under covering constraints.
In this paper, we systematically strengthen natural LPs by the construction of the configuration LPs presented in @cite_15 . The authors of @cite_15 propose a scheme that consists of solving the new LPs (with an exponential number of variables) and rounding the fractional solutions to integer ones using decoupling inequalities. By this method, they derive approximation algorithms for several (offline) optimization problems which can be formulated with linear constraints and an objective function that is a power of some constant @math . Specifically, the approximation ratio is proved to be the Bell number @math for several problems. In our approach, a crucial element in characterizing the performance of an algorithm is the smoothness property of functions. The smoothness argument was introduced in the context of algorithmic game theory, where it has successfully characterized the performance of equilibria (price of anarchy) in many classes of games, such as congestion games @cite_12 . This notion inspires the definition of smoothness in our paper.
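For concreteness, here is a minimal LaTeX statement of the smoothness framework from @cite_12 that inspires the notion used above (the symbols C, s, s^*, lambda, mu follow @cite_12 ; this illustrates the cited framework, not this paper's own local-smoothness definition):

A cost-minimization game is $(\lambda,\mu)$-smooth if, for every pair of outcomes $s$ and $s^*$,
\[ \sum_{i=1}^{n} C_i(s_i^*, s_{-i}) \;\le\; \lambda\, C(s^*) + \mu\, C(s), \]
and every $(\lambda,\mu)$-smooth game with $\mu < 1$ has price of anarchy at most $\lambda/(1-\mu)$.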
{ "cite_N": [ "@cite_15", "@cite_12" ], "mid": [ "2009636784", "2294025081" ], "abstract": [ "We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as xq, q age; 1, with the amount x of resources used. We define a novel linear programming relaxation for such problems, and then show that the integrality gap of the relaxation is Aq, where Aq is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that ||a#x03A3;n i=1 Xi||q ale; Cq ||a#x03A3;n i=1 Yi||q, where Xi are independent nonnegative random variables, Yi are possibly dependent nonnegative random variable, and each Yi has the same distribution as Xi. The inequality was proved by de la Pea#x00F1;a in 1990. However, the optimal constant Cq was not known. We show that the optimal constant is Cq = Aq1 q.", "The price of anarchy, defined as the ratio of the worst-case objective function value of a Nash equilibrium of a game and that of an optimal outcome, quantifies the inefficiency of selfish behavior. Remarkably good bounds on this measure are known for a wide range of application domains. However, such bounds are meaningful only if a game's participants successfully reach a Nash equilibrium. This drawback motivates inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash equilibria and correlated equilibria, and to sequences of outcomes generated by natural experimentation strategies, such as successive best responses and simultaneous regret-minimization. We establish a general and fundamental connection between the price of anarchy and its seemingly more general relatives. First, we identify a “canonical sufficient condition” for an upper bound on the price of anarchy of pure Nash equilibria, which we call a smoothness argument. Second, we prove an “extension theorem”: every bound on the price of anarchy that is derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of every outcome sequence generated by no-regret learners. Smoothness arguments also have automatic implications for the inefficiency of approximate equilibria, for bicriteria bounds, and, under additional assumptions, for polynomial-length best-response sequences. Third, we prove that in congestion games, smoothness arguments are “complete” in a proof-theoretic sense: despite their automatic generality, they are guaranteed to produce optimal worst-case upper bounds on the price of anarchy." ] }
1708.04675
2746492579
Convolution Neural Networks on Graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batches of arbitrarily shaped data together with their evolving graph Laplacians trained in a supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.
Besides the above papers on constructing convolution layers on graphs, many others studied the problem from a different angle. @cite_31 first investigated learning a network from a set of heterogeneous graphs to predict node-level features as well as to do graph completion, although it is based on node sequence selection. @cite_25 introduced a graph diffusion process, which delivers an effect equivalent to that of convolution, and @cite_25 's DCNN has no dependency on the indexing of nodes. Its constraints are the highly restricted locality of the diffusion process and the expensive dense matrix multiplication.
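To make the diffusion idea concrete, here is a minimal NumPy sketch of the hop-wise diffusion at the core of DCNN @cite_25 (a rough illustration only: the per-hop weight shape, the tanh nonlinearity, and the assumption of no isolated nodes are ours, not details taken from @cite_25 ):

import numpy as np

def dcnn_diffusion(A, X, W, n_hops=3):
    # A: (n, n) adjacency matrix (assumes no isolated nodes),
    # X: (n, f) node features, W: (n_hops, f) per-hop weights.
    P = A / A.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    Pk = np.eye(A.shape[0])
    hops = []
    for _ in range(n_hops):
        Pk = Pk @ P                        # successive powers P, P^2, ...
        hops.append(Pk @ X)                # features diffused over k hops
    Z = np.stack(hops, axis=1)             # (n, n_hops, f) diffusion tensor
    return np.tanh(Z * W[None, :, :])      # elementwise weights + nonlinearity

Note that each hop requires a dense matrix product with P, which is exactly the expensive dense multiplication pointed out above.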
{ "cite_N": [ "@cite_31", "@cite_25" ], "mid": [ "2406128552", "2963984147" ], "abstract": [ "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks." ] }
1708.04675
2746492579
Convolution Neural Networks on Graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batches of arbitrarily shaped data together with their evolving graph Laplacians trained in a supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.
Recently, @cite_19 investigated a problem similar to ours by learning an edge-conditioned feature weight matrix from edge features using a separate filter-generating network @cite_22 , although @cite_19 's application is point cloud classification. There are other studies on learning from graph data, such as @cite_5 , which proposed a kernel embedding method on the feature space of graph-structured data. Another similar work is @cite_17 , but its models do not fall into the family of feed-forward CNN analogs on graphs.
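As a rough sketch of the edge-conditioned idea attributed above to @cite_19 (the two-layer filter-generating MLP, its weight shapes, and the mean aggregation are illustrative assumptions, not the exact architecture of @cite_19 ):

import numpy as np

def edge_conditioned_conv(X, edges, edge_feats, W1, b1, W2, b2):
    # X: (n, f_in) node features; edges: iterable of (i, j) index pairs;
    # edge_feats: (n_edges, f_e) edge features. A small MLP maps each edge
    # feature to a flattened (f_out, f_in) filter, so the weights are
    # conditioned on the edge rather than shared globally.
    n, f_in = X.shape
    f_out = W2.shape[1] // f_in
    out = np.zeros((n, f_out))
    deg = np.zeros(n)
    for (i, j), e in zip(edges, edge_feats):
        h = np.maximum(e @ W1 + b1, 0.0)            # hidden layer (ReLU)
        Theta = (h @ W2 + b2).reshape(f_out, f_in)  # edge-specific filter
        out[i] += Theta @ X[j]                      # message from j to i
        deg[i] += 1.0
    return out / np.maximum(deg, 1.0)[:, None]      # average over neighbors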
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_22", "@cite_17" ], "mid": [ "2949455170", "2950191616", "", "2366141641" ], "abstract": [ "A number of problems can be formulated as prediction on graph-structured data. In this work, we generalize the convolution operator from regular grids to arbitrary graphs while avoiding the spectral domain, which allows us to handle graphs of varying size and connectivity. To move beyond a simple diffusion, filter weights are conditioned on the specific edge labels in the neighborhood of a vertex. Together with the proper choice of graph coarsening, we explore constructing deep neural networks for graph classification. In particular, we demonstrate the generality of our formulation in point cloud classification, where we set the new state of the art, and on a graph classification dataset, where we outperform other deep learning approaches. The source code is available at this https URL", "Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type which either exploit statistics of the structures or make use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach also limited kernel methods from scaling up to millions of data points, and exploiting discriminative information to learn feature representations. We propose, structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs 2 times faster, produces models which are @math times smaller, while at the same time achieving the state-of-the-art predictive performance.", "", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. 
Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks." ] }
1708.04675
2746492579
Convolution Neural Networks on Graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batches of arbitrarily shaped data together with their evolving graph Laplacians trained in a supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.
For chemical compounds, naturally modeled as graphs, @cite_6 @cite_10 @cite_7 made several successful attempts at applying neural networks to learn representations for predictive tasks, which were usually tackled with handcrafted features @cite_28 or hashing @cite_26 . However, due to the constraints of spatial convolution, their models failed to make full use of the atom connectivities, which carry more information than bond features @cite_16 . More recent explorations of progressive networks, multi-task learning, and low-shot or one-shot learning have also been carried out @cite_1 @cite_18 . Lastly, DeepChem [1] ( https://github.com/deepchem/deepchem ) is an outstanding open-source cheminformatics machine learning benchmark; our code and demos were built and tested on top of it.
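For reference, a minimal usage sketch of the DeepChem benchmark mentioned above (the featurizer string, model class, and hyperparameters follow the public DeepChem tutorials and may vary across versions; they are not this paper's exact experimental setup):

import deepchem as dc

# Tutorial-style loading of Tox21 with graph featurization.
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv')
train, valid, test = datasets

# A graph-convolution baseline; hyperparameters are illustrative.
model = dc.models.GraphConvModel(n_tasks=len(tasks), mode='classification')
model.fit(train, nb_epoch=10)

metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print(model.evaluate(valid, [metric], transformers))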
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_28", "@cite_1", "@cite_6", "@cite_16", "@cite_10" ], "mid": [ "2604306554", "", "2949858440", "2189911347", "2950774882", "", "", "2214665483" ], "abstract": [ "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.", "", "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. 
For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "The Tox21 Data Challenge has been the largest effort of the scientific community to compare computational methods for toxicity prediction. This challenge comprised 12,000 environmental chemicals and drugs which were measured for 12 different toxic effects by specifically designed assays. We participated in this challenge to assess the performance of Deep Learning in computational toxicity prediction. Deep Learning has already revolutionized image processing, speech recognition, and language understanding but has not yet been applied to computational toxicity. Deep Learning is founded on novel algorithms and architectures for artificial neural networks together with the recent availability of very fast computers and massive datasets. It discovers multiple levels of distributed representations of the input, with higher levels representing more abstract concepts. We hypothesized that the construction of a hierarchy of chemical features gives Deep Learning the edge over other toxicity prediction methods. Furthermore, Deep Learning naturally enables multi-task learning, that is, learning of all toxic effects in one neural network and thereby learning of highly informative chemical features. In order to utilize Deep Learning for toxicity prediction, we have developed the DeepTox pipeline. First, DeepTox normalizes the chemical representations of the compounds. Then it computes a large number of chemical descriptors that are used as input to machine learning methods. In its next step, DeepTox trains models, evaluates them, and combines the best of them to ensembles. Finally, DeepTox predicts the toxicity of new compounds. In the Tox21 Data Challenge, DeepTox had the highest performance of all computational methods winning the grand challenge, the nuclear receptor panel, the stress response panel, and six single assays (teams \"Bioinf@JKU\"). We found that Deep Learning excelled in toxicity prediction and outperformed many other computational approaches like naive Bayes, support vector machines, and random forests.", "Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds. However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the residual LSTM embedding, that, when combined with graph convolutional neural networks, significantly improves the ability to learn meaningful distance metrics over small-molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep-learning in drug discovery.", "", "", "Deep convolutional neural networks comprise a subclass of deep neural networks (DNN) with a constrained architecture that leverages the spatial and temporal structure of the domain they model. Convolutional networks achieve the best predictive performance in areas such as speech and image recognition by hierarchically composing simple local features into complex models. 
Although DNNs have been used in drug discovery for QSAR and ligand-based bioactivity predictions, none of these models have benefited from this powerful convolutional architecture. This paper introduces AtomNet, the first structure-based, deep convolutional neural network designed to predict the bioactivity of small molecules for drug discovery applications. We demonstrate how to apply the convolutional concepts of feature locality and hierarchical composition to the modeling of bioactivity and chemical interactions. In further contrast to existing DNN techniques, we show that AtomNet’s application of local convolutional filters to structural target information successfully predicts new active molecules for targets with no previously known modulators. Finally, we show that AtomNet outperforms previous docking approaches on a diverse set of benchmarks by a large margin, achieving an AUC greater than 0.9 on 57.8% of the targets in the DUDE benchmark." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
The computational complexity of coalitional manipulation problems was studied extensively. For general scoring rules @math , most earlier work considered the case where the number of candidates is bounded: Conitzer et al. @cite_1 show that when @math is bounded, @math -UCM is solvable in polynomial time.
{ "cite_N": [ "@cite_1" ], "mid": [ "1972375916" ], "abstract": [ "In multiagent settings where the agents have different preferences, preference aggregation is a central issue. Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable. One could try to avoid manipulation by using protocols where determining a beneficial manipulation is hard. Especially among computational agents, it is reasonable to measure this hardness by computational complexity. Some earlier work has been done in this area, but it was assumed that the number of voters and candidates is unbounded. Such hardness results lose relevance when the number of candidates is small, because manipulation algorithms that are exponential only in the number of candidates (and only slightly so) might be available. We give such an algorithm for an individual agent to manipulate the Single Transferable Vote (STV) protocol, which has been shown hard to manipulate in the above sense. This motivates the core of this article, which derives hardness results for realistic elections where the number of candidates is a small constant (but the number of voters can be large). The main manipulation question we study is that of coalitional manipulation by weighted voters. (We show that for simpler manipulation problems, manipulation cannot be hard with few candidates.) We study both constructive manipulation (making a given candidate win) and destructive manipulation (making a given candidate not win). We characterize the exact number of candidates for which manipulation becomes hard for the plurality, Borda, STV, Copeland, maximin, veto, plurality with runoff, regular cup, and randomized cup protocols. We also show that hardness of manipulation in this setting implies hardness of manipulation by an individual in unweighted settings when there is uncertainty about the others' votes (but not vice-versa). To our knowledge, these are the first results on the hardness of manipulation when there is uncertainty about the others' votes." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
In the weighted case, the situation is different: for all positional scoring rules @math , except plurality-like rules, @math -WCM is @math -hard when @math @cite_1 @cite_19 @cite_3 . In particular, this holds for Borda-WCM. However, the computational hardness of Borda-UCM remained open for quite some time, until it was finally shown to be @math -hard as well in 2011 @cite_0 @cite_23 , even for the case of @math and adding @math manipulators.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_23" ], "mid": [ "1972375916", "1493942848", "", "1977012944", "104706817" ], "abstract": [ "In multiagent settings where the agents have different preferences, preference aggregation is a central issue. Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable. One could try to avoid manipulation by using protocols where determining a beneficial manipulation is hard. Especially among computational agents, it is reasonable to measure this hardness by computational complexity. Some earlier work has been done in this area, but it was assumed that the number of voters and candidates is unbounded. Such hardness results lose relevance when the number of candidates is small, because manipulation algorithms that are exponential only in the number of candidates (and only slightly so) might be available. We give such an algorithm for an individual agent to manipulate the Single Transferable Vote (STV) protocol, which has been shown hard to manipulate in the above sense. This motivates the core of this article, which derives hardness results for realistic elections where the number of candidates is a small constant (but the number of voters can be large). The main manipulation question we study is that of coalitional manipulation by weighted voters. (We show that for simpler manipulation problems, manipulation cannot be hard with few candidates.) We study both constructive manipulation (making a given candidate win) and destructive manipulation (making a given candidate not win). We characterize the exact number of candidates for which manipulation becomes hard for the plurality, Borda, STV, Copeland, maximin, veto, plurality with runoff, regular cup, and randomized cup protocols. We also show that hardness of manipulation in this setting implies hardness of manipulation by an individual in unweighted settings when there is uncertainty about the others' votes (but not vice-versa). To our knowledge, these are the first results on the hardness of manipulation when there is uncertainty about the others' votes.", "Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.", "", "Scoring protocols are a broad class of voting systems. Each is defined by a vector (@a\"1,@a\"2,...,@a\"m), @a\"1>[email protected]\"2>=...>[email protected]\"m, of integers such that each voter contributes @a\"1 points to his her first choice, @a\"2 points to his her second choice, and so on, and any candidate receiving the most points is a winner. 
What is it about scoring-protocol election systems that makes some have the desirable property of being NP-complete to manipulate, while others can be manipulated in polynomial time? We find the complete, dichotomizing answer: Diversity of dislike. Every scoring-protocol election system having two or more point values assigned to candidates other than the favorite--i.e., having |{α_i | 2 ≤ i ≤ m}| ≥ 2--is NP-complete to manipulate. Every other scoring-protocol election system can be manipulated in polynomial time. In effect, we show that--other than trivial systems (where all candidates always tie), plurality voting, and plurality voting's transparently disguised translations--every scoring-protocol election system is NP-complete to manipulate.", "The Borda voting rule is a positional scoring rule where, for m candidates, for every vote the first candidate receives m-1 points, the second m-2 points and so on. A Borda winner is a candidate with highest total score. It has been a prominent open problem to determine the computational complexity of UNWEIGHTED COALITIONAL MANIPULATION UNDER BORDA: Can one add a certain number of additional votes (called manipulators) to an election such that a distinguished candidate becomes a winner? We settle this open problem by showing NP-hardness even for two manipulators and three input votes. Moreover, we discuss extensions and limitations of this hardness result." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
@cite_10 present two additional heuristics: iteratively assign the largest unallocated score either to the candidate with the largest gap, or to the candidate with the largest ratio of gap to the number of scores yet to be allocated to that candidate. To the best of our knowledge, these algorithms do not have a counterpart for the weighted case. A sketch of the first heuristic appears below.
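The following Python sketch is our rough reconstruction of the largest-gap heuristic from the one-line description above, not code from @cite_10; we simplify by building the k manipulator votes one at a time, and all names are ours.

```python
# Rough reconstruction (ours) of the largest-gap heuristic for unweighted
# Borda manipulation, simplified to construct votes one manipulator at a time.

def greedy_borda_manipulation(scores, p, k):
    """scores: candidate -> Borda score from the non-manipulators;
    p: preferred candidate; k: number of manipulators.
    Returns the manipulator votes and whether p (weakly) wins."""
    m = len(scores)
    totals = dict(scores)
    totals[p] += k * (m - 1)  # every manipulator ranks p first
    votes = []
    for _ in range(k):
        # Largest gap below p's final total = lowest current score, so hand
        # out the scores m-2, m-3, ..., 0 in increasing order of totals.
        others = sorted((c for c in totals if c != p), key=lambda c: totals[c])
        votes.append([p] + others)
        for rank, c in enumerate(others):
            totals[c] += m - 2 - rank
    return votes, all(totals[c] <= totals[p] for c in totals)

# Example: p trails (4 points vs. 6 and 5), but two manipulators suffice;
# here p finishes with 10 points and every rival with at most 7.
votes, ok = greedy_borda_manipulation({"p": 4, "a": 6, "b": 5, "c": 2}, "p", 2)
print(votes, ok)
```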
{ "cite_N": [ "@cite_10" ], "mid": [ "2052020474" ], "abstract": [ "We investigate manipulation of the Borda voting rule, as well as two elimination style voting rules, Nanson's and Baldwin's voting rules, which are based on Borda voting. We argue that these rules have a number of desirable computational properties. For unweighted Borda voting, we prove that it is NP-hard for a coalition of two manipulators to compute a manipulation. This resolves a long-standing open problem in the computational complexity of manipulating common voting rules. We prove that manipulation of Baldwin's and Nanson's rules is computationally more difficult than manipulation of Borda, as it is NP-hard for a single manipulator to compute a manipulation. In addition, for Baldwin's and Nanson's rules with weighted votes, we prove that it is NP-hard for a coalition of manipulators to compute a manipulation with a small number of candidates.Because of these NP-hardness results, we compute manipulations using heuristic algorithms that attempt to minimise the number of manipulators. We propose several new heuristic methods. Experiments show that these methods significantly outperform the previously best known heuristic method for the Borda rule. Our results suggest that, whilst computing a manipulation of the Borda rule is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice. In contrast to the Borda rule, our experiments with Baldwin's and Nanson's rules demonstrate that both of them are often more difficult to manipulate in practice. These results suggest that elimination style voting rules deserve further study." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
As discussed, configuration linear programs were also used in the scheduling literature, for example for the following two extensively studied problems. In the so-called Santa Claus problem @cite_11 , Santa Claus has @math presents that he wishes to distribute between @math kids, and @math is the value that kid @math has for present @math . The goal is to maximize the happiness of the least happy kid: @math , where @math is the set of presents allocated to kid @math . In the problem of makespan minimization on unrelated machines @cite_15 , we need to assign @math jobs to @math machines, and @math is the time required for machine @math to execute job @math . The goal is to minimize the makespan @math , where @math is the set of jobs assigned to machine @math . Both papers studied a natural and well-researched 'restricted assignment' variant of the two problems, where @math . In @cite_11 , they obtained an @math -multiplicative approximation to the first problem, and in @cite_15 , a @math -multiplicative approximation to the second.
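To make the two objectives concrete, here is a tiny brute-force Python sketch (purely illustrative and exponential in the input size; it has nothing to do with the configuration-LP algorithms of @cite_11 and @cite_15) that computes both optima on small instances:

```python
# Tiny illustrative sketch (ours): brute-force evaluation of the max-min
# (Santa Claus) and min-max (makespan) objectives. p[i][j] is kid i's value
# for present j, or machine i's processing time for job j.

from itertools import product

def santa_claus_opt(p):
    """Maximize, over all allocations, the happiness of the least happy kid."""
    n_kids, n_presents = len(p), len(p[0])
    best = 0
    for assign in product(range(n_kids), repeat=n_presents):
        happiness = [0] * n_kids
        for j, i in enumerate(assign):
            happiness[i] += p[i][j]
        best = max(best, min(happiness))
    return best

def makespan_opt(p):
    """Minimize, over all assignments, the load of the busiest machine."""
    n_machines, n_jobs = len(p), len(p[0])
    best = float("inf")
    for assign in product(range(n_machines), repeat=n_jobs):
        load = [0] * n_machines
        for j, i in enumerate(assign):
            load[i] += p[i][j]
        best = min(best, max(load))
    return best

print(santa_claus_opt([[5, 0, 3], [5, 4, 0]]))  # 4; restricted values p_j or 0
print(makespan_opt([[2, 3, 1], [2, 100, 1]]))   # 3; 100 stands in for "infeasible"
```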
{ "cite_N": [ "@cite_15", "@cite_11" ], "mid": [ "2135932424", "1978593916" ], "abstract": [ "One of the classic results in scheduling theory is the @math -approximation algorithm by Lenstra, Shmoys, and Tardos for the problem of scheduling jobs to minimize makespan on unrelated machines; i.e., job @math requires time @math if processed on machine @math . More than two decades after its introduction it is still the algorithm of choice even in the restricted model where processing times are of the form @math . This problem, also known as the restricted assignment problem, is NP-hard to approximate within a factor less than @math , which is also the best known lower bound for the general version. Our main result is a polynomial time algorithm that estimates the optimal makespan of the restricted assignment problem within a factor @math , where @math is an arbitrarily small constant. The result is obtained by upper bounding the integrality gap of a certain strong linear program, known as the configuration LP, that was previously successfu...", "We consider the following problem: The Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let pij be the value that kid i has for present j. The Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e he tries to maximize mini=1,...,m sumj ∈ Si pij where Si is a set of presents received by the i-th kid.Our main result is an O(log log m log log log m) approximation algorithm for the restricted assignment case of the problem when pij ∈ pj,0 (i.e. when present j has either value pj or 0 for each kid). Our algorithm is based on rounding a certain natural exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of Ω(m1 2) in the general case, when pij can be arbitrary." ] }