Columns: aid (string, length 9-15), mid (string, length 7-10), abstract (string, length 78-2.56k), related_work (string, length 92-1.77k), ref_abstract (dict).
cs0603115
1539159366
The Graphics Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphics processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results.
Other libraries represent multiprecision numbers as the unevaluated sum of several double-precision floating-point numbers, such as Briggs' double-double @cite_18 , Bailey's quad-doubles @cite_5 , and Daumas' floating-point expansions @cite_0 . This representation format relies on IEEE-754 features that lead to simple algorithms for the arithmetic operators. However, this format is confined to low precision (2 to 3 floating-point numbers), as the complexity of the algorithms increases quadratically with the precision.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_18" ], "mid": [ "2163593765", "2104380208", "" ], "abstract": [ "In modern computers, the floating point unit is the part of the processor delivering the highest computing power and getting most attention from the design team. Performance of any multiple precision application will be dramatically enhanced by adequate use of floating point expansions. We present three multiplication algorithms, faster and more integrated than the stepwise algorithm proposed earlier. We have tested these novel algorithms on an application that computes the determinant of a matrix. In the absence of overflow or underflow, the process is error free and possibly more efficient than its integer based counterpart.", "A quad-double number is an unevaluated sum of four IEEE double precision numbers, capable of representing at least 212 bits of significand. We present the algorithms for various arithmetic operations (including the four basic operations and various algebraic and transcendental operations) on quad-double numbers. The performance of the algorithms, implemented in C++, is also presented.", "" ] }
cs0603115
1539159366
The Graphics Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphics processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results.
For example, Strzodka @cite_14 proposed building a 16-bit fixed-point representation and its operations out of the 8-bit fixed-point format. In his work, two 8-bit numbers were used to emulate a 16-bit number. The author claimed that operators in his representation format were only $50
{ "cite_N": [ "@cite_14" ], "mid": [ "69766967" ], "abstract": [ "There is a growing demand for high precision texture formats fed by the increasing number of textures per pixel and multi-pass algorithms in dynamic texturing and visualization. Therefore support for wider data formats in graphics hardware is evolving. The existing functionality of current graphics cards, however, can already be used to provide higher precision textures. This paper shows how to emulate a 16 bit precise signed format by use of RGBA8 textures and existing shader and register operations. Thereby a 16 bit number is stored in two unsigned 8 bit color channels. The focus lies on a 16 bit signed number format which generalizes existing 8 bit formats allowing lossless format expansions, and which has an exact representation of 1, 0 and allowing stable long-lasting dynamic texture updates. Implementations of basic arithmetic operations and dependent texture loop-ups in this format are presented and example algorithms dealing with 16 bit precise dynamic updates of displacement maps, normal textures and filters demonstrate some of the resulting application areas." ] }
quant-ph0601097
1545991018
In this note we consider optimised circuits for implementing Shor's quantum factoring algorithm. First I give a circuit for which none of the about 2n qubits need to be initialised (though we still have to make the usual 2n measurements later on). Then I show how the modular additions in the algorithm can be carried out with a superposition of an arithmetic sequence. This makes parallelisation of Shor's algorithm easier. Finally I show how one can factor with only about 1.5n qubits, and maybe even fewer.
Also, I understand that John Watrous @cite_4 has been using uniform superpositions over subgroups (and cosets) in his work on quantum algorithms for solvable groups. Thus he also used coset superpositions to represent elements of the factor group (and probably also to carry out factor-group operations on them). In our case the overall group is the integers and the (normal) subgroup is the multiples of @math . The factor group whose elements we want to represent is @math . We represent these elements by superpositions over cosets of the form @math . A problem in our case is that we can only do things approximately, as the integers and the cosets are infinite sets.
{ "cite_N": [ "@cite_4" ], "mid": [ "1967088292" ], "abstract": [ "In this paper we give a polynomial-time quantum algorithm for computing orders of solvable groups. Several other problems, such as testing membership in solvable groups, testing equality of subgroups in a given solvable group, and testing normality of a subgroup in a given solvable group, reduce to computing orders of solvable groups and therefore admit polynomial-time quantum algorithms as well. Our algorithm works in the setting of black-box groups, wherein none of these problems have polynomial-time classical algorithms. As an important byproduct, our algorithm is able to produce a pure quantum state that is uniform over the elements in any chosen subgroup of a solvable group, which yields a natural way to apply existing quantum algorithms to factor groups of solvable groups." ] }
cs0601044
2950161596
Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be viewed as a learning task, with models inferred from a limited number of samples. This paper is an investigation of two methods to improve generalization in GP-based learning: 1) the selection of the best-of-run individuals using a three-data-sets methodology, and 2) the application of parsimony pressure in order to reduce the complexity of the solutions. Results using GP in a binary classification setup show that, while the accuracy on the test sets is preserved with less variance compared to baseline results, the mean tree size obtained with the tested methods is significantly reduced.
Some GP learning applications @cite_11 @cite_2 @cite_18 have made use of a three-data-sets methodology, but without a thorough analysis of its effects. Panait and Luke @cite_25 conducted experiments on different approaches for increasing the robustness of the solutions generated by GP, using a three-data-sets methodology to evaluate the efficiency of each approach. Rowland @cite_21 and Kushchu @cite_4 conducted studies on generalization in EC and GP. Both argue for testing solutions in previously unseen situations to improve robustness.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_2", "@cite_25", "@cite_11" ], "mid": [ "", "2027433115", "2133454221", "1593492645", "1828504193", "" ], "abstract": [ "", "In genetic programming (GP), learning problems can be classified broadly into two types: those using data sets, as in supervised learning, and those using an environment as a source of feedback. An increasing amount of research has concentrated on the robustness or generalization ability of the programs evolved using GP. While some of the researchers report on the brittleness of the solutions evolved, others proposed methods of promoting robustness generalization. It is important that these methods are not ad hoc and are applicable to other experimental setups. In this paper, learning concepts from traditional machine learning and a brief review of research on generalization in GP are presented. The paper also identifies problems with brittleness of solutions produced by GP and suggests a method for promoting robustness generalization of the solutions in simulating learning behaviors using GP.", "EC-based supervised learning has been demonstrated to be an effective approach to forming predictive models in genomics, spectral interpretation, and other problems in modern biology. Longer-established methods such as PLS and ANN are also often successful. In supervised learning, overtraining is always a potential problem. The literature reports numerous methods of validating predictive models in order to avoid overtraining. Some of these approaches can be applied to EC-based methods of supervised learning, though the characteristics of EC learning are different from those obtained with PLS and ANN and selecting a suitably general model can be more difficult. This paper reviews the issues and various approaches, illustrating salient points with examples taken from applications in bioinformatics.", "This paper applies the evolution of GP teams to different classification and regression problems and compares different methods for combining the outputs of the team programs. These include hybrid approaches where (1) a neural network is used to optimize the weights of programs in a team for a common decision and (2) a realnumbered vector (the representation of evolution strategies) of weights is evolved with each term in parallel. The cooperative team approach results in an improved training and generalization performance compared to the standard GP method. The higher computational overhead of team evolution is counteracted by using a fast variant of linear GP.", "Many evolutionary computation search spaces require fitness assessment through the sampling of and generalization over a large set of possible cases as input. Such spaces seem particularly apropos to Genetic Programming, which notionally searches for computer algorithms and functions. Most existing research in this area uses ad-hoc approaches to the sampling task, guided more by intuition than understanding. In this initial investigation, we compare six approaches to sampling large training case sets in the context of genetic programming representations. These approaches include fixed and random samples, and adaptive methods such as coevolution or fitness sharing. Our results suggest that certain domain features may lead to the preference of one approach to generalization over others. In particular, coevolution methods are strongly domain-dependent. 
We conclude the paper with suggestions for further investigations to shed more light onto how one might adjust fitness assessment to make various methods more effective.", "" ] }
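As a plain illustration of the three-data-sets methodology referred to above (a generic sketch, not the exact protocol of any cited paper): fitness during evolution uses a training set, the best-of-run individual is chosen on a separate validation set, and the test set is used once to report that individual's generalization score. The helper names below are hypothetical.

```python
def evaluate(individual, dataset):
    """Fraction of cases the (callable) individual classifies correctly."""
    return sum(individual(x) == y for x, y in dataset) / len(dataset)

def select_best_of_run(final_population, valid_set, test_set):
    # Evolution itself would have used a separate training set for fitness;
    # the best-of-run individual is picked on the validation set...
    best = max(final_population, key=lambda ind: evaluate(ind, valid_set))
    # ...and the test set is touched exactly once, to report its generalization score.
    return best, evaluate(best, test_set)
```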
cs0601044
2950161596
Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be viewed as a learning task, with models inferred from a limited number of samples. This paper is an investigation of two methods to improve generalization in GP-based learning: 1) the selection of the best-of-run individuals using a three-data-sets methodology, and 2) the application of parsimony pressure in order to reduce the complexity of the solutions. Results using GP in a binary classification setup show that, while the accuracy on the test sets is preserved with less variance compared to baseline results, the mean tree size obtained with the tested methods is significantly reduced.
Because of the bloat phenomenon typical of GP, parsimony pressure has been more widely studied @cite_19 @cite_12 @cite_15 @cite_23 . In particular, several papers @cite_5 @cite_22 @cite_1 have produced interesting results around the idea of using parsimony pressure to increase the generalization capability of GP-evolved solutions. However, a counter-argument is given in @cite_13 , where solutions biased toward low complexity showed, in some circumstances, increased generalization error. This is in accordance with the argument given in @cite_17 , which states that less complex solutions are not always more robust.
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_19", "@cite_23", "@cite_5", "@cite_15", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2155582960", "2139470281", "1591505907", "", "1592596255", "83216595", "1951785234", "1500215390", "1761022139" ], "abstract": [ "Genetic programming is distinguished from other evolutionary algorithms in that it uses tree representations of variable size instead of linear strings of fixed length. The flexible representation scheme is very important because it allows the underlying structure of the data to be discovered automatically. One primary difficulty, however, is that the solutions may grow too big without any improvement of their generalization ability. In this article we investigate the fundamental relationship between the performance and complexity of the evolved structures. The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis. We consider genetic programming as a statistical inference problem and apply the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms. An adaptive learning method is then presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy. The effectiveness of this approach is empirically shown on the induction of sigma-pi neural networks for solving a real-world medical diagnosis problem as well as benchmark tasks.", "Genetic Programming (GP) uses variable size representations as programs. Size becomes an important and interesting emergent property of the structures evolved by GP. The size of programs can be both a controlling and a controlled factor in GP search. Size influences the efficiency of the search process and is related to the generality of solutions. This paper analyzes the size and generality issues in standard GP and GP using subroutines and addresses the question whether such an analysis can help control the search process. We relate the size, generalization and modularity issues for programs evolved to control an agent in a dynamic and non-deterministic environment, as exemplified by the Pac-Man game.", "The rapid growth of program code is an important problem in genetic programming systems. In the present paper we investigate a selection scheme based on multiobjective optimization. Since we want to obtain accurate and small solutions, we reformulate this problem as multiobjective optimization. We show that selection based on the Pareto nondomination criterion reduces code growth and processing time without significant loss of solution accuracy.", "", "", "", "A common data mining heuristic is, \"when choosing between models with the same training error, less complex models should be preferred as they perform better on unseen data\". This heuristic may not always hold. In genetic programming a preference for less complex models is implemented as: (i) placing a limit on the size of the evolved program; (ii) penalizing more complex individuals, or both. The paper presents a GP-variant with no limit on the complexity of the evolved program that generates highly accurate models on a common dataset.", "", "Many KDD systems incorporate an implicit or explicit preference for simpler models, but this use of “Occam‘s razor” has been strongly criticized by several authors (e.g., Schaffer, 1993s Webb, 1996). 
This controversy arises partly because Occam‘s razor has been interpreted in two quite different ways. The first interpretation (simplicity is a goal in itself) is essentially correct, but is at heart a preference for more comprehensible models. The second interpretation (simplicity leads to greater accuracy) is much more problematic. A critical review of the theoretical arguments for and against it shows that it is unfounded as a universal principle, and demonstrably false. A review of empirical evidence shows that it also fails as a practical heuristic. This article argues that its continued use in KDD risks causing significant opportunities to be missed, and should therefore be restricted to the comparatively few applications where it is appropriate. The article proposes and reviews the use of domain constraints as an alternative for avoiding overfitting, and examines possible methods for handling the accuracy–comprehensibility trade-off." ] }
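To make the notion of parsimony pressure concrete, here is a minimal sketch of the common "error plus complexity penalty" form of fitness (my own illustration; the cited papers use various, more refined schemes, and the coefficient c below is an arbitrary choice).

```python
def parsimony_fitness(error_rate, tree_size, c=0.01):
    """Scalarized fitness, lower is better.

    error_rate: fraction of misclassified training cases
    tree_size:  number of nodes in the program tree
    c:          parsimony coefficient trading accuracy against complexity
    """
    return error_rate + c * tree_size

# e.g. an individual with 8% training error and 120 nodes:
print(parsimony_fitness(0.08, 120))   # -> 1.28
```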
cs0601051
1489954224
This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, which was independently proposed elsewhere. This operator allows us to closely tie the computational complexity of the answer-set checking and answer-set existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP).
This notion of unfolding derives from the work on unfolding of intensional sets @cite_8 , and has been independently described in @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_8" ], "mid": [ "80590317", "2027993053" ], "abstract": [ "Dept. of Computer Science, K.U.LeuvenCelestijnenlaan 200A, B-3001 Heverlee, BelgiumE-mail: pelov,marcd,maurice @cs.kuleuven.ac.beAbstract. We define a translation of aggregate programs to normal lo-gic programs which preserves the set of partial stable models. We thendefine the classes of definite and stratified aggregate programs and showthat the translation of such programs are, respectively, definite and strat-ified logic programs. Consequently these two classes of programs havea single partial stable model which is two-valued and is also the well-founded model. Our definition of stratification is more general than theexisting one and covers a strictly larger class of programs.", "The aim of this paper is to extend theConstructive Negation technique to the case ofCLP(SeT), a Constraint Logic Programming (CLP) language based on hereditarily (and hybrid) finite sets. The challenging aspects of the problem originate from the fact that the structure on whichCLP(SeT) is based is notadmissible closed, and this does not allow to reuse the results presented in the literature concerning the relationships betweenCLP and constructive negation." ] }
cs0601051
1489954224
This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, which was independently proposed elsewhere. This operator allows us to closely tie the computational complexity of the answer-set checking and answer-set existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP).
The work of @cite_5 @cite_2 @cite_11 contains an elegant generalization of several semantics of logic programs to logic programs with aggregates. The key idea in this work is the use of approximation theory in defining several semantics for logic programs with aggregates (e.g., two-valued semantics, ultimate three-valued stable semantics, three-valued stable model semantics). In particular, in @cite_11 , the authors describe a fixpoint operator, called @math , operating on 3-valued interpretations and parameterized by the choice of approximating aggregates.
{ "cite_N": [ "@cite_5", "@cite_11", "@cite_2" ], "mid": [ "80590317", "1520574003", "125332145" ], "abstract": [ "Dept. of Computer Science, K.U.LeuvenCelestijnenlaan 200A, B-3001 Heverlee, BelgiumE-mail: pelov,marcd,maurice @cs.kuleuven.ac.beAbstract. We define a translation of aggregate programs to normal lo-gic programs which preserves the set of partial stable models. We thendefine the classes of definite and stratified aggregate programs and showthat the translation of such programs are, respectively, definite and strat-ified logic programs. Consequently these two classes of programs havea single partial stable model which is two-valued and is also the well-founded model. Our definition of stratification is more general than theexisting one and covers a strictly larger class of programs.", "We introduce a family of partial stable model semantics for logic programs with arbitrary aggregate relations. The semantics are parametrized by the interpretation of aggregate relations in three-valued logic. Any semantics in this family satisfies two important properties: (i) it extends the partial stable semantics for normal logic programs and (ii) total stable models are always minimal. We also give a specific instance of the semantics and show that it has several attractive features.", "" ] }
cs0601051
1489954224
This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, which was independently proposed elsewhere. This operator allows us to closely tie the computational complexity of the answer-set checking and answer-set existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP).
For the sake of completeness, we review the translation of @cite_5 , presented using the notation of our paper. Given a ground logic program with aggregates @math , @math denotes the ground normal logic program obtained after the translation. The process begins with the translation of each aggregate atom @math of the form @math into a disjunction @math , where @math , and each @math is a conjunction of the form \[ \bigwedge_{l \in s_1} l \;\wedge\; \bigwedge_{l \in H(\cdot) \setminus s_2} \textit{not}\ l . \] The construction of @math considers only the pairs @math that satisfy the following condition: each interpretation @math such that @math and @math must satisfy @math . The translation @math is then created by replacing rules with a disjunction in the body by a set of standard rules in a straightforward way. For example, the rule \[ a \leftarrow (b \vee c),\ d \] is replaced by the two rules \[ a \leftarrow b,\ d \qquad \text{and} \qquad a \leftarrow c,\ d . \] From the definitions of @math and of aggregate solutions, we have the following simple lemma. We next show that fixpoint answer sets of @math are answer sets of @math .
{ "cite_N": [ "@cite_5" ], "mid": [ "80590317" ], "abstract": [ "Dept. of Computer Science, K.U.LeuvenCelestijnenlaan 200A, B-3001 Heverlee, BelgiumE-mail: pelov,marcd,maurice @cs.kuleuven.ac.beAbstract. We define a translation of aggregate programs to normal lo-gic programs which preserves the set of partial stable models. We thendefine the classes of definite and stratified aggregate programs and showthat the translation of such programs are, respectively, definite and strat-ified logic programs. Consequently these two classes of programs havea single partial stable model which is two-valued and is also the well-founded model. Our definition of stratification is more general than theexisting one and covers a strictly larger class of programs." ] }
cs0601051
1489954224
This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, which was independently proposed elsewhere. This operator allows us to closely tie the computational complexity of the answer-set checking and answer-set existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP).
In @cite_5 , it is shown that answer sets of @math coincide with the of @math (defined by the operator @math ). This, together with the above lemma and Theorem , allows us to conclude the following theorem.
{ "cite_N": [ "@cite_5" ], "mid": [ "80590317" ], "abstract": [ "Dept. of Computer Science, K.U.LeuvenCelestijnenlaan 200A, B-3001 Heverlee, BelgiumE-mail: pelov,marcd,maurice @cs.kuleuven.ac.beAbstract. We define a translation of aggregate programs to normal lo-gic programs which preserves the set of partial stable models. We thendefine the classes of definite and stratified aggregate programs and showthat the translation of such programs are, respectively, definite and strat-ified logic programs. Consequently these two classes of programs havea single partial stable model which is two-valued and is also the well-founded model. Our definition of stratification is more general than theexisting one and covers a strictly larger class of programs." ] }
cs0601068
1628238937
In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as 'plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user/kernel pointer checks, and race conditions. By implementing these checks, we were able to uncover previously unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework and found numerous bugs.
Static compile-time analysis with programmer-written compiler extensions was used to catch around 500 bugs in the Linux kernel @cite_6 , @cite_3 . Using static data-flow analysis and domain-specific knowledge, many bugs were found in the heavily audited kernel. Ways have also been suggested to automatically detect anomalies as deviant behavior in the source code @cite_2 . Most of the bugs checked by static analysis are local to a single file, sometimes even to a single procedure, because of the complexity involved in performing global compile-time analysis. This limits the power of static analysis tools to surface bugs. Our approach, on the other hand, can track data flow across many different software components, possibly written by different vendors, and can thus target a different variety of errors. However, static analysis has the huge advantage of being able to check all possible code paths, while our execution-driven approach can only check for bugs along the path actually executed in the system.
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_2" ], "mid": [ "1600965014", "2066859698", "2043811931" ], "abstract": [ "This paper shows how system-specific static analysis can find security errors that violate rules such as \"integers from untrusted sources must be sanitized before use\" and \"do not dereference user-supplied pointers.\" In our approach, programmers write system-specific extensions that are linked into the compiler and check their code for errors. We demonstrate the approach's effectiveness by using it to find over 100 security errors in Linux and OpenBSD, over 50 of which have led to kernel patches. An unusual feature of our approach is the use of methods to automatically detect when we miss code actions that should be checked.", "Systems software such as OS kernels, embedded systems, and libraries must obey many rules for both correctness and performance. Common examples include \"accesses to variable A must be guarded by lock B,\" \"system calls must check user pointers for validity before using them,\" and \"message handlers should free their buffers as quickly as possible to allow greater parallelism.\" Unfortunately, adherence to these rules is largely unchecked. This paper attacks this problem by showing how system implementors can use meta-level compilation (MC) to write simple, system-specific compiler extensions that automatically check their code for rule violations. By melding domain-specific knowledge with the automatic machinery of compilers, MC brings the benefits of language-level checking and optimizing to the higher, \"meta\" level of the systems implemented in these languages. This paper demonstrates the effectiveness of the MC approach by applying it to four complex, real systems: Linux, OpenBSD, the Xok exokernel, and the FLASH machine's embedded software. MC extensions found roughly 500 errors in these systems and led to numerous kernel patches. Most extensions were less than a hundred lines of code and written by implementors who had a limited understanding of the systems checked.", "A major obstacle to finding program errors in a real system is knowing what correctness rules the system must obey. These rules are often undocumented or specified in an ad hoc manner. This paper demonstrates techniques that automatically extract such checking information from the source code itself, rather than the programmer, thereby avoiding the need for a priori knowledge of system rules.The cornerstone of our approach is inferring programmer \"beliefs\" that we then cross-check for contradictions. Beliefs are facts implied by code: a dereference of a pointer, p, implies a belief that p is non-null, a call to \"unlock(1)\" implies that 1 was locked, etc. For beliefs we know the programmer must hold, such as the pointer dereference above, we immediately flag contradictions as errors. For beliefs that the programmer may hold, we can assume these beliefs hold and use a statistical analysis to rank the resulting errors from most to least likely. For example, a call to \"spin_lock\" followed once by a call to \"spin_unlock\" implies that the programmer may have paired these calls by coincidence. If the pairing happens 999 out of 1000 times, though, then it is probably a valid belief and the sole deviation a probable error. 
The key feature of this approach is that it requires no a priori knowledge of truth: if two beliefs contradict, we know that one is an error without knowing what the correct belief is.Conceptually, our checkers extract beliefs by tailoring rule \"templates\" to a system --- for example, finding all functions that fit the rule template \"a must be paired with b.\" We have developed six checkers that follow this conceptual framework. They find hundreds of bugs in real systems such as Linux and OpenBSD. From our experience, they give a dramatic reduction in the manual effort needed to check a large system. Compared to our previous work [9], these template checkers find ten to one hundred times more rule instances and derive properties we found impractical to specify manually." ] }
cs0601068
1628238937
In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as 'plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user/kernel pointer checks, and race conditions. By implementing these checks, we were able to uncover previously unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework and found numerous bugs.
Recently, model checking was used to find serious file system errors @cite_8 . Using an abstract model and intelligent reduction of the state space, the authors could check for errors that would have required an exponential number of search paths to find through traditional testing. Model checking can catch deeper semantic bugs than is possible with static compile-time analysis. We intend to use similar ideas to model check entire system images, thus allowing us to search a larger number of execution paths while performing our shadow-machine analysis. One of the obstacles in this direction is the slow speed of machine simulation, which makes the execution of speculative paths almost infeasible.
{ "cite_N": [ "@cite_8" ], "mid": [ "2124877509" ], "abstract": [ "This article shows how to use model checking to find serious errors in file systems. Model checking is a formal verification technique tuned for finding corner-case errors by comprehensively exploring the state spaces defined by a system. File systems have two dynamics that make them attractive for such an approach. First, their errors are some of the most serious, since they can destroy persistent data and lead to unrecoverable corruption. Second, traditional testing needs an impractical, exponential number of test cases to check that the system will recover if it crashes at any point during execution. Model checking employs a variety of state-reducing techniques that allow it to explore such vast state spaces efficiently.We built a system, FiSC, for model checking file systems. We applied it to four widely-used, heavily-tested file systems: ext3, JFS, ReiserFS and XFS. We found serious bugs in all of them, 33 in total. Most have led to patches within a day of diagnosis. For each file system, FiSC found demonstrable events leading to the unrecoverable destruction of metadata and entire directories, including the file system root directory “ ”." ] }
cs0601068
1628238937
In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as 'plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user/kernel pointer checks, and race conditions. By implementing these checks, we were able to uncover previously unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework and found numerous bugs.
Shadow machine simulation has previously been used to perform taint analysis to determine the lifetime of sensitive data @cite_0 . This work reported the startling observation that sensitive data like passwords and credit card numbers may reside in a computer's memory and on disk long after the user has logged out. Such leaks occur in caches, I/O buffers, kernel queues, and other places that are not under the control of the application developer. Our work uses a similar taint analysis, marking all bytes received over the network as untrusted and checking whether they are used in unwanted ways (e.g., as a format string).
{ "cite_N": [ "@cite_0" ], "mid": [ "1499241274" ], "abstract": [ "Strictly limiting the lifetime (i.e. propagation and duration of exposure) of sensitive data (e.g. passwords) is an important and well accepted practice in secure software development. Unfortunately, there are no current methods available for easily analyzing data lifetime, and very little information available on the quality of today's software with respect to data lifetime. We describe a system we have developed for analyzing sensitive data lifetime through whole system simulation called TaintBochs. TaintBochs tracks sensitive data by \"tainting\" it at the hardware level. Tainting information is then propagated across operating system, language, and application boundaries, permitting analysis of sensitive data handling at a whole system level. We have used TaintBochs to analyze sensitive data handling in several large, real world applications. Among these were Mozilla, Apache, and Perl, which are used to process millions of passwords, credit card numbers, etc. on a daily basis. Our investigation reveals that these applications and the components they rely upon take virtually no measures to limit the lifetime of sensitive data they handle, leaving passwords and other sensitive data scattered throughout user and kernel memory. We show how a few simple and practical changes can greatly reduce sensitive data lifetime in these applications." ] }
cs0601068
1628238937
In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as 'plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user/kernel pointer checks, and race conditions. By implementing these checks, we were able to uncover previously unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework and found numerous bugs.
Recently, @cite_11 used taint analysis on untrusted data to check for security violations such as buffer overflows and format-string attacks in applications. By implementing a Valgrind skin, they were able to restrict the overhead of their taint-analysis tool to 10-25x. Considering that computation power is relatively cheap, they suggest using their tool in production runs of the software, so that such attacks can be detected and prevented online.
{ "cite_N": [ "@cite_11" ], "mid": [ "2102970979" ], "abstract": [ "Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [26, 43]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we describe how TaintCheck could improve automatic signature generation in" ] }
cs0601073
1535620672
In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances from any source to any destination in the network, and hence we are able to deduce a length parameter that is unique to each routing strategy. This parameter, defined as the effective radius, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical large-scale wireless ad-hoc networks: 1) we obtain the distribution of the lengths of all the paths in a network for any given routing strategy; 2) we are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, N , increases to infinity; 3) for any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as $\sqrt{N}$ bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks.
The distribution of distances between source and destination nodes has been calculated before @cite_8 @cite_6 . Both cited approaches depend on a two-dimensional geometry, which is justifiable to some extent. In this work we opt for a three-dimensional formulation of the problem in order not to restrict the topology analyzed. We are aware, however, that the appropriate dimensionality of the routing problem in wireless ad-hoc networks is itself not well defined.
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "2161981474", "2156192134" ], "abstract": [ "In this paper we study the lengths of the routes in ad hoc networks. We propose a simplified theoretical model having as objective to estimate the path length for the routing protocols that are using flooding during their path discovery phase. We show how to evaluate the average gain in the hop number that one can obtain by using a simple reduction strategy. We prove the gain to be linear under very general conditions and show how it can be computed practically.", "The probability distribution is found for the link distance between two randomly positioned mobile radios in a wireless network for two representative deployment scenarios: (1) the mobile locations are uniformly distributed over a rectangular area and (2) the x and y coordinates of the mobile locations have Gaussian distributions. It is shown that the shapes of the link distance distributions for these scenarios are very similar when the width of the rectangular area in the first scenario is taken to be about three times the standard deviation of the location distribution in the second scenario. Thus the choice of mobile location distribution is not critical, but can be selected for the convenience of other aspects of the analysis or simulation of the mobile system." ] }
cs0601073
1535620672
In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances from any source to any destination in the network, and hence we are able to deduce a length parameter that is unique to each routing strategy. This parameter, defined as the effective radius, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical large-scale wireless ad-hoc networks: 1) we obtain the distribution of the lengths of all the paths in a network for any given routing strategy; 2) we are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, N , increases to infinity; 3) for any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as $\sqrt{N}$ bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks.
The analysis of the routing problem in terms of a length scale that characterizes how aware the distributed routing protocol is of its environment is not original, and has been used before in the work by Melodia @cite_0 . The authors of that work introduce a phenomenological quantity called the "Knowledge Range", which represents the physical extent up to which the routing strategy is capable of finding the shortest path.
{ "cite_N": [ "@cite_0" ], "mid": [ "2126003739" ], "abstract": [ "Since ad hoc and sensor networks can be composed of a very large number of devices, the scalability of network protocols is a major design concern. Furthermore, network protocols must be designed to prolong the battery lifetime of the devices. However, most existing routing techniques for ad hoc networks are known not to scale well. On the other hand, the so-called geographical routing algorithms are known to be scalable but their energy efficiency has never been extensively and comparatively studied. In a geographical routing algorithm, data packets are forwarded by a node to its neighbor based on their respective positions. The neighborhood of each node is constituted by the nodes that lie within a certain radio range. Thus, from the perspective of a node forwarding a packet, the next hop depends on the width of the neighborhood it perceives. The analytical framework proposed in this paper allows to analyze the relationship between the energy efficiency of the routing tasks and the extension of the range of the topology knowledge for each node. A wider topology knowledge may improve the energy efficiency of the routing tasks but increases the cost of topology information due to signaling packets needed to acquire this information. The problem of determining the optimal topology knowledge range for each node to make energy efficient geographical routing decisions is tackled by integer linear programming. It is shown that the problem is intrinsically localized, i.e., a limited topology knowledge is sufficient to make energy efficient forwarding decisions. The leading forwarding rules for geographical routing are compared in this framework, and the energy efficiency of each of them is studied. Moreover, a new forwarding scheme, partial topology knowledge forwarding (PTKF), is introduced, and shown to outperform other existing schemes in typical application scenarios. A probe-based distributed protocol for knowledge range adjustment (PRADA) is finally introduced that allows each node to efficiently select online its topology knowledge range. PRADA is shown to rapidly converge to a near-optimal solution." ] }
cs0601073
1535620672
In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances from any source to any destination in the network, and hence we are able to deduce a length parameter that is unique to each routing strategy. This parameter, defined as the effective radius, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical large-scale wireless ad-hoc networks: 1) we obtain the distribution of the lengths of all the paths in a network for any given routing strategy; 2) we are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, N , increases to infinity; 3) for any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as $\sqrt{N}$ bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks.
The use of random walks as an effective (or even the only feasible) strategy for routing in large-scale ad-hoc networks has been suggested in several works @cite_1 @cite_13 . In these works, the common motivation for using this strategy is the conclusion that effective distributed routing in a large-scale network is infeasible, as it would require solving an NP-complete problem @cite_9 .
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_1" ], "mid": [ "1561598073", "2112169033", "2120361596" ], "abstract": [ "The task of moving data (i.e., the routing problem) in large-scale sensor networks has to contend with several obstacles, including severe power constraints at each node and temporary, but random, failures of nodes, rendering routing schemes designed for traditional communication networks ineffective. We consider the open problem of finding optimum routes between any fixed source-destination pair in a large-scale network, such that the communication load (i.e., the required power) is distributed among all the nodes, the overall latency is minimized, and the algorithm is decentralized and robust. A recent work addressed this problem in the context of a grid topology and showed how to obtain load-balanced routing, but transmissions are restricted to be among near-neighbors and the overall latency grows linearly with the number of nodes. We show how one can route messages between source and destination nodes along random small-world topologies using a decentralized algorithm. Specifically, nodes make connections independently (based only on the source and destination information in the packets), according to a distribution that guarantees an average latency of O(log sup 2 (N)), while preventing hotspot regions by providing an almost uniform distribution of traffic load over all nodes. Surprisingly, the randomized nature of the network structure keeps the average per-node power consumption almost the same as in the case of a grid topology (i.e., local transmissions), while providing an exponential reduction in latency, resulting in a highly fault-tolerant and stable design capable of working in very dynamic environments.", "The upcoming gigabit-per-second high-speed networks are expected to support a wide range of communication-intensive real-time multimedia applications. The requirement for timely delivery of digitized audio-visual information raises new challenges for next-generation integrated services broadband networks. One of the key issues is QoS routing. It selects network routes with sufficient resources for the requested QoS parameters. The goal of routing solutions is twofold: (1) satisfying the QoS requirements for every admitted connection, and (2) achieving global efficiency in resource utilization. Many unicast multicast QoS routing algorithms have been published, and they work with a variety of QoS requirements and resource constraints. Overall, they can be partitioned into three broad classes: (1) source routing, (2) distributed routing, and (3) hierarchical routing algorithms. We give an overview of the QoS routing problem as well as the existing solutions. We present the strengths and weaknesses of different routing strategies, and outline the challenges. We also discuss the basic algorithms in each class, classify and compare them, and point out possible future directions in the QoS routing area.", "We consider a routing problem in the context of large scale networks with uncontrolled dynamics. A case of uncontrolled dynamics that has been studied extensively is that of mobile nodes, as this is typically the case in cellular and mobile ad-hoc networks. In this paper however we study routing in the presence of a different type of dynamics: nodes do not move, but instead switch between active and inactive states at random times. 
Our interest in this case is motivated by the behavior of sensor nodes powered by renewable sources, such as solar cells or ambient vibrations. In this paper we formalize the corresponding routing problem as a problem of constructing suitably constrained random walks on random dynamic graphs. We argue that these random walks should be designed so that their resulting invariant distribution achieves a certain load balancing property, and we give simple distributed algorithms to compute the local parameters for the random walks that achieve the sought behavior. A truly novel feature of our formulation is that the algorithms we obtain are able to route messages along all possible routes between a source and a destination node, without performing explicit route discovery repair computations, and without maintaining explicit state information about available routes at the nodes. To the best of our knowledge, these are the first algorithms that achieve true multipath routing (in a statistical sense), at the complexity of simple stateless operations." ] }
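As a minimal illustration of random-walk routing (a toy sketch, not the schemes of the cited works): a packet moves at each step to a uniformly chosen neighbour, the only per-node state being the neighbour list, and the hop count of the walk is the latency paid for that statelessness.

```python
import random

def random_walk_route(neighbors, src, dst, max_steps=10_000, seed=0):
    """Route by blind random walk on an undirected graph given as {node: [neighbours]}.

    Returns the number of hops taken, or None if dst was not reached within max_steps.
    No routing tables are built; each node only needs its local neighbour list.
    """
    rng = random.Random(seed)
    node, hops = src, 0
    while node != dst and hops < max_steps:
        node = rng.choice(neighbors[node])
        hops += 1
    return hops if node == dst else None

# A small ring of 6 nodes: 0-1-2-3-4-5-0
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(random_walk_route(ring, 0, 3))
```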
cs0601089
1912679131
This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.
Distributed learning has been addressed in a variety of other works. Reference @cite_6 considered a PAC-like model for learning with many individually trained hypotheses in a distribution-specific learning framework. Reference @cite_14 considered the classical model for decentralized detection @cite_12 in a nonparametric setting. Reference @cite_0 studied the existence of consistent estimators in several models for distributed learning. From a data mining perspective, @cite_7 and @cite_2 derived algorithms for distributed boosting. Most similar to the research presented here, @cite_1 presented a general framework for distributed linear regression motivated by WSNs.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_12" ], "mid": [ "2101982583", "", "2165004589", "2063289366", "2061177108", "2169958062", "1525038591" ], "abstract": [ "We consider the problem of decentralized detection under constraints on the number of bits that can be transmitted by each sensor. In contrast to most previous work, in which the joint distribution of sensor observations is assumed to be known, we address the problem when only a set of empirical samples is available. We propose a novel algorithm using the framework of empirical risk minimization and marginalized kernels, and analyze its computational and statistical properties both theoretically and empirically. We provide an efficient implementation of the algorithm, and demonstrate its performance on both simulated and real data sets.", "", "We present distributed regression, an efficient and general framework for in-network modeling of sensor data. In this framework, the nodes of the sensor network collaborate to optimally fit a global function to each of their local measurements. The algorithm is based upon kernel linear regression, where the model takes the form of a weighted sum of local basis functions; this provides an expressive yet tractable class of models for sensor network data. Rather than transmitting data to one another or outside the network, nodes communicate constraints on the model parameters, drastically reducing the communication required. After the algorithm is run, each node can answer queries for its local region, or the nodes can efficiently transmit the parameters of the model to a user outside the network. We present an evaluation of the algorithm based upon data from a 48-node sensor network deployment at the Intel Research - Berkeley Lab, demonstrating that our distributed algorithm converges to the optimal solution at a fast rate and is very robust to packet losses.", "We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.", "Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process among members of a network of simple learning agents. The agents are limited in their ability to communicate to a central fusion center and thus, the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we question the existence of agent decision rules and fusion rules that result in a universally consistent ensemble; the answers to this question present new issues to consider with regard to universal consistency. 
This paper addresses the issue of whether or not the guarantees provided by Stone's theorem in centralized environments hold in distributed settings.", "In this paper, we propose a general framework for distributed boosting intended for efficient integrating specialized classifiers learned over very large and distributed homogeneous databases that cannot be merged at a single location. Our distributed boosting algorithm can also be used as a parallel classification technique, where a massive database that cannot fit into main computer memory is partitioned into disjoint subsets for a more efficient analysis. In the proposed method, at each boosting round the classifiers are first learned from disjoint datasets and then exchanged amongst the sites. Finally the classifiers are combined into a weighted voting ensemble on each disjoint data set. The ensemble that is applied to an unseen test set represents an ensemble of ensembles built on all distributed sites. In experiments performed on four large data sets the proposed distributed boosting method achieved classification accuracy comparable or even slightly better than the standard boosting algorithm while requiring less memory and less computational time. In addition, the communication overhead of the distributed boosting algorithm is very small making it a viable alternative to the standard boosting for large-scale databases.", "1 Introduction.- 1.1 Distributed Detection Systems.- 1.2 Outline of the Book.- 2 Elements of Detection Theory.- 2.1 Introduction.- 2.2 Bayesian Detection Theory.- 2.3 Minimax Detection.- 2.4 Neyman-Pearson Test.- 2.5 Sequential Detection.- 2.6 Constant False Alarm Rate (CFAR) Detection.- 2.7 Locally Optimum Detection.- 3 Distributed Bayesian Detection: Parallel Fusion Network.- 3.1 Introduction.- 3.2 Distributed Detection Without Fusion.- 3.3 Design of Fusion Rules.- 3.4 Detection with Parallel Fusion Network.- 4 Distributed Bayesian Detection: Other Network Topologies.- 4.1 Introduction.- 4.2 The Serial Network.- 4.3 Tree Networks.- 4.4 Detection Networks with Feedback.- 4.5 Generalized Formulation for Detection Networks.- 5 Distributed Detection with False Alarm Rate Constraints.- 5.1 Introduction.- 5.2 Distributed Neyman-Pearson Detection.- 5.3 Distributed CFAR Detection.- 5.4 Distributed Detection of Weak Signals.- 6 Distributed Sequential Detection.- 6.1 Introduction.- 6.2 Sequential Test Performed at the Sensors.- 6.3 Sequential Test Performed at the Fusion Center.- 7 Information Theory and Distributed Hypothesis Testing.- 7.1 Introduction.- 7.2 Distributed Detection Based on Information Theoretic Criterion.- 7.3 Multiterminal Detection with Data Compression.- Selected Bibliography." ] }
cs0601089
1912679131
This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.
Ongoing research in the machine learning community seeks to design statistically sound learning algorithms that scale to large data sets (e.g., @cite_10 and references therein). One approach is to decompose the database into smaller "chunks", and subsequently parallelize the learning process by assigning distinct processors (or agents) to each of the chunks. In principle, algorithms for parallelizing learning may be useful for distributed learning, and vice-versa. To our knowledge, there has not been an attempt to parallelize reproducing kernel methods using the approach outlined below.
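To make the chunk-and-parallelize idea concrete, the following is a minimal illustrative sketch in Python: the data are split among a few agents, each agent fits its own regularized kernel least-squares estimator, and predictions are averaged. This is only a generic illustration of the decomposition strategy, not the collaborative training algorithm derived in the paper; all function and variable names are hypothetical.

import numpy as np

def fit_local_krr(X, y, gamma=1.0, lam=0.1):
    # Fit a Gaussian-kernel regularized least-squares estimator on one chunk.
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return X, alpha

def predict_local(model, Xq, gamma=1.0):
    Xtr, alpha = model
    Kq = np.exp(-gamma * ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1))
    return Kq @ alpha

# Split the data among four "agents", train independently, average predictions.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(600, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(600)
chunks = np.array_split(np.arange(len(X)), 4)
models = [fit_local_krr(X[idx], y[idx]) for idx in chunks]
Xq = rng.uniform(-1, 1, size=(5, 2))
y_hat = np.mean([predict_local(m, Xq) for m in models], axis=0)

Simple averaging of this kind ignores the communication constraints that motivate the paper; it is included only to fix ideas about the decomposition.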
{ "cite_N": [ "@cite_10" ], "mid": [ "2135106139" ], "abstract": [ "Very high dimensional learning systems become theoretically possible when training examples are abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm should at least take a brief look at each example. But should all examples be given equal attention?This contribution proposes an empirical answer. We first present an online SVM algorithm based on this premise. LASVM yields competitive misclassification rates after a single pass over the training examples, outspeeding state-of-the-art SVM solvers. Then we show how active example selection can yield faster training, higher accuracies, and simpler models, using only a fraction of the training example labels." ] }
cs0601089
1912679131
This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.
A related area of research lies in the study of ensemble methods in machine learning; examples of these techniques include bagging, boosting, and mixtures of experts (e.g., @cite_13 and others). Typically, the focus of these works is on the statistical and algorithmic advantages of learning with an ensemble, and not on the problem of learning under communication constraints. To our knowledge, the methods derived here have not previously appeared in this related context, though future work in distributed learning may benefit from the many insights gleaned from this important area.
{ "cite_N": [ "@cite_13" ], "mid": [ "1988790447" ], "abstract": [ "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line." ] }
cs0601089
1912679131
This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.
The research presented here generalizes the model and algorithm discussed in @cite_15 , which focused exclusively on the WSN application. Distinctions between the current and former work are discussed in more detail below.
{ "cite_N": [ "@cite_15" ], "mid": [ "1834302797" ], "abstract": [ "Wireless sensor networks (WSNs) have attracted considerable attention in recent years and motivate a host of new challenges for distributed signal processing. The problem of distributed or decentralized estimation has often been considered in the context of parametric models. However, the success of parametric methods is limited by the appropriateness of the strong statistical assumptions made by the models. In this paper, a more flexible nonparametric model for distributed regression is considered that is applicable in a variety of WSN applications including field estimation. Here, starting with the standard regularized kernel least-squares estimator, a message-passing algorithm for distributed estimation in WSNs is derived. The algorithm can be viewed as an instantiation of the successive orthogonal projection (SOP) algorithm. Various practical aspects of the algorithm are discussed and several numerical simulations validate the potential of the approach." ] }
cs0601127
2159112912
The access graph model for paging, defined by (, 1991) and studied in (, 1992) has a number of troubling aspects. The access graph has to be known in advance to the paging algorithm and the memory required to represent the access graph itself may be very large. We present a truly online strongly competitive paging algorithm in the access graph model that does not have any prior information on the access sequence. We give both strongly competitive deterministic and strongly competitive randomized algorithms. Our algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space, i.e., no more memory than needed to store the virtual translation tables for pages in memory. In fact, we can reduce this to O(k log k) bits using appropriate probabilistic data structures. We also extend the locality of reference concept captured by the access graph model to allow changes in the behavior of the underlying process. We formalize this by introducing the concept of an "extended access graph". We consider a graph parameter Δ that captures the degree of change allowed. We study this new model and give algorithms that are strongly competitive for the (unknown) extended access graph. We can do so for almost all values of Δ for which it is possible.
Borodin et al. @cite_8 also consider deterministic uniform paging algorithms. They prove the existence of an optimal paging algorithm in PSPACE( @math ). They give a natural uniform paging algorithm, called FAR, and prove that it obtains a competitive ratio no worse than @math times the asymptotic competitive ratio for the graph. This result is improved in a paper by Irani, Karlin and Phillips @cite_6 , in which it is shown that FAR is very strongly competitive. The same paper also presents a very strongly competitive algorithm for a sub-class of access graphs, called structured program graphs.
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "2073713788", "2069162168" ], "abstract": [ "What is the best paging algorithm if one has partial information about the possible sequences of page requests? We give a partial answer to this question by presenting the analysis of strongly competitive paging algorithms in the access graph model. This model restricts page requests so that they conform to a notion of locality of reference given by an arbitrary access graph. We first consider optimal algorithms for undirected access graphs. [ Proc. 23rd ACM Symposium on Theory of Computing, 1991, pp. 249--259] define an algorithm, called FAR, and prove that it is within a logarithmic factor of the optimal online algorithm. We prove that FAR is in fact strongly competitive, i.e., within a constant factor of the optimum. For directed access graphs, we present an algorithm that is strongly competitive on structured program graphs---graphs that model a subset of the request sequences of structured programs.", "The Sleator-Tarjan competitive analysis of paging (Comm. ACM28 (1985), 202-208) gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical comptitiveness of LRU is much larger than observed in practice, In addition, we would like to address the following important question: given some knowledge of a program?s reference pattern, can we use it to improve paging performance on that program? We address these concerns by introducing an important practical element that underlies the philosophy behind paging: locality of reference. We devise a graph-theoretical model, the access graph, for studying locality of reference. We use it to prove results that address the practical concerns mentioned above, In addition, we use our model to address the following questions: How well is LRU likely to perform on a given program? Is there a universal paging algorithm that achieves (nearly) the best possible paging performance on every program? We do so without compromising the benefits of the Sleator-Tarjan model, while bringing it closer to practice." ] }
cs0601127
2159112912
The access graph model for paging, defined by (, 1991) and studied in (, 1992) has a number of troubling aspects. The access graph has to be known in advance to the paging algorithm and the memory required to represent the access graph itself may be very large. We present a truly online strongly competitive paging algorithm in the access graph model that does not have any prior information on the access sequence. We give both strongly competitive deterministic and strongly competitive randomized algorithms. Our algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space, i.e., no more memory than needed to store the virtual translation tables for pages in memory. In fact, we can reduce this to O(k log k) bits using appropriate probabilistic data structures. We also extend the locality of reference concept captured by the access graph model to allow changes in the behavior of the underlying process. We formalize this by introducing the concept of an "extended access graph". We consider a graph parameter Δ that captures the degree of change allowed. We study this new model and give algorithms that are strongly competitive for the (unknown) extended access graph. We can do so for almost all values of Δ for which it is possible.
Fiat and Rosen @cite_11 present an access-graph-based heuristic that is truly online and makes use of a (weighted) dynamic access graph. In this sense we emulate their concept. While the Fiat and Rosen algorithm is experimentally interesting in that it seems to beat LRU, it is certainly not strongly competitive, and is known to have a competitive ratio of @math .
{ "cite_N": [ "@cite_11" ], "mid": [ "2068714069" ], "abstract": [ "In this paper we devise new paging heuristics motivated by the access graph model of paging. Unlike the access graph model and the related Markov paging model our heuristics are truly on-line in that we do not assume any prior knowledge of the program just about to be executed. The Least Recently Used heuristic for paging is remarkably good, and is known experimentally to be superior to many of the suggested alternatives on real program traces. Experiments we ve performed suggest that our heuristics beat LRU fairly consistently, over a wide range of cache sizes and programs. The number of page faults can be as low as 1 2 the number of page faults for LRU and is on average 7 to 9 percent less than the number of faults for LRU. (Depending on how this average is computed.) We have built a program tracer that gives the page access sequence for real program executions of 200 - 3,300 thousand page access requests, and our simulations are based on 25 of these program traces. Overall, we have performed several thousand such simulations. While we have no real evidence to suggest that the programs we ve traced are typical in any sense, we havemore » made use of an experimental open_quotes protocol close_quotes designed to avoid experimenter bias. We strongly emphasize that our results are only preliminary and that much further work needs to be done.« less" ] }
quant-ph0512258
2951669051
We propose various new techniques in quantum information theory, including a de Finetti style representation theorem for finite symmetric quantum states. As an application, we give a proof for the security of quantum key distribution which applies to arbitrary protocols.
One of the most popular proof techniques was proposed by Shor and Preskill @cite_37 , based on ideas of Lo and Chau @cite_66 . It uses a connection between key distribution and entanglement purification @cite_55 , pointed out by Ekert @cite_15 (see also @cite_13 ). The proof technique of Shor and Preskill was later refined and applied to other protocols (see, e.g., @cite_56 @cite_3 ).
{ "cite_N": [ "@cite_37", "@cite_55", "@cite_3", "@cite_56", "@cite_15", "@cite_13", "@cite_66" ], "mid": [ "2071764857", "", "", "2165923712", "2051051926", "1997124098", "1970006170" ], "abstract": [ "We prove that the 1984 protocol of Bennett and Brassard (BB84) for quantum key distribution is secure. We first give a key distribution protocol based on entanglement purification, which can be proven secure using methods from Lo and Chau's proof of security for a similar protocol. We then show that the security of this protocol implies the security of BB84. The entanglement purification based protocol uses Calderbank-Shor-Steane codes, and properties of these codes are used to remove the use of quantum computation from the Lo-Chau protocol.", "", "", "Shor and Preskill (see Phys. Rev. Lett., vol.85, p.441, 2000) have provided a simple proof of security of the standard quantum key distribution scheme by Bennett and Brassard (1984) by demonstrating a connection between key distribution and entanglement purification protocols (EPPs) with one-way communications. Here, we provide proofs of security of standard quantum key distribution schemes, Bennett and Brassard and the six-state scheme, against the most general attack, by using the techniques of two-way entanglement purification. We demonstrate clearly the advantage of classical post-processing with two-way classical communications over classical post-processing with only one-way classical communications in quantum key distribution (QKD). This is done by the explicit construction of a new protocol for (the error correction detection and privacy amplification of) Bennett and Brassard that can tolerate a bit error rate of up to 18.9 , which is higher than what any Bennett and Brassard scheme with only one-way classical communications can possibly tolerate. Moreover, we demonstrate the advantage of the six-state scheme over Bennett and Brassard by showing that the six-state scheme can strictly tolerate a higher bit error rate than Bennett and Brassard. In particular, our six-state protocol can tolerate a bit error rate of 26.4 , which is higher than the upper bound of 25 bit error rate for any secure Bennett and Brassard protocol. Consequently, our protocols may allow higher key generation rate and remain secure over longer distances than previous protocols. Our investigation suggests that two-way entanglement purification is a useful tool in the study of advantage distillation, error correction, and privacy amplification protocols.", "Practical application of the generalized Bells theorem in the so-called key distribution process in cryptography is reported. The proposed scheme is based on the Bohms version of the Einstein-Podolsky-Rosen gedanken experiment and Bells theorem is used to test for eavesdropping. © 1991 The American Physical Society.", "Ekert has described a cryptographic scheme in which Einstein-Podolsky-Rosen (EPR) pairs of particles are used to generate identical random numbers in remote places, while Bell's theorem certifies that the particles have not been measured in transit by an eavesdropper. We describe a related but simpler EPR scheme and, without invoking Bell's theorem, prove it secure against more general attacks, including substitution of a fake EPR source. 
Finally we show our scheme is equivalent to the original 1984 key distribution scheme of Bennett and Brassard, which uses single particles instead of EPR pairs.", "Quantum key distribution is widely thought to offer unconditional security in communication between two users. Unfortunately, a widely accepted proof of its security in the presence of source, device, and channel noises has been missing. This long-standing problem is solved here by showing that, given fault-tolerant quantum computers, quantum key distribution over an arbitrarily long distance of a realistic noisy channel can be made unconditionally secure. The proof is reduced from a noisy quantum scheme to a noiseless quantum scheme and then from a noiseless quantum scheme to a noiseless classical scheme, which can then be tackled by classical probability theory." ] }
quant-ph0512258
2951669051
We propose various new techniques in quantum information theory, including a de Finetti style representation theorem for finite symmetric quantum states. As an application, we give a proof for the security of quantum key distribution which applies to arbitrary protocols.
In @cite_54 , we presented a general method for proving the security of QKD which does not rely on entanglement purification. Instead, it is based on a result on the security of privacy amplification in the context of quantum adversaries @cite_30 @cite_52 . This method was later extended and applied to prove the security of new variants of the BB84 and the six-state protocol @cite_44 @cite_67 . In @cite_44 @cite_67 , we use an alternative technique (different from the quantum de Finetti theorem) to show that collective attacks are equivalent to coherent attacks for certain QKD protocols. The security proof given in this thesis is based on ideas developed in these papers.
{ "cite_N": [ "@cite_30", "@cite_67", "@cite_54", "@cite_52", "@cite_44" ], "mid": [ "2119098193", "2079729767", "", "2114805880", "1980534149" ], "abstract": [ "We address the question whether quantum memory is more powerful than classical memory. In particular, we consider a setting where information about a random n-bit string X is stored in s classical or quantum bits, for s<n, i.e., the stored information is bound to be only partial. Later, a randomly chosen predicate F about X has to be guessed using only the stored information. The maximum probability of correctly guessing F(X) is then compared for the cases where the storage device is classical or quantum mechanical, respectively. We show that, despite the fact that the measurement of quantum bits can depend arbitrarily on the predicate F, the quantum advantage is negligible already for small values of the difference n-s. Our setting generalizes the setting of who considered the problem of guessing an arbitrary bit (i.e., one of the n bits) of X. An implication for cryptography is that privacy amplification by universal hashing remains essentially equally secure when the adversary's memory is allowed to be quantum rather than only classical. Since privacy amplification is a main ingredient of many quantum key distribution (QKD) protocols, our result can be used to prove the security of QKD in a generic way.", "We investigate a general class of quantum key distribution (QKD) protocols using one-way classical communication. We show that full security can be proven by considering only collective attacks. We derive computable lower and upper bounds on the secret-key rate of those QKD protocols involving only entropies of two-qubit density operators. As an illustration of our results, we determine new bounds for the Bennett-Brassard 1984, the 6-state, and the Bennett 1992 protocols. We show that in all these cases the first classical processing that the legitimate partners should apply consists in adding noise.", "", "Privacy amplification is the art of shrinking a partially secret string Z to a highly secret key S. We show that, even if an adversary holds quantum information about the initial string Z, the key S obtained by two-universal hashing is secure, according to a universally composable security definition. Additionally, we give an asymptotically optimal lower bound on the length of the extractable key S in terms of the adversary's (quantum) knowledge about Z. Our result has applications in quantum cryptography. In particular, it implies that many of the known quantum key distribution protocols are universally composable.", "We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. 
Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel." ] }
quant-ph0512258
2951669051
We propose various new techniques in quantum information theory, including a de Finetti style representation theorem for finite symmetric quantum states. As an application, we give a proof for the security of quantum key distribution which applies to arbitrary protocols.
Our new approach for proving the security of QKD has already found various applications. For example, it is used for the analysis of protocols based on continuous systems (continuous-variable QKD) as well as to improve the analysis of known (practical) protocols by exploiting the fact that an adversary cannot control the noise in the physical devices owned by Alice and Bob (see, e.g., @cite_38 @cite_41 @cite_4 ).
{ "cite_N": [ "@cite_41", "@cite_38", "@cite_4" ], "mid": [ "", "2062768305", "2953325427" ], "abstract": [ "", "We present here an information theoretic study of Gaussian collective attacks on the continuous variable key distribution protocols based on Gaussian modulation of coherent states. These attacks, overlooked in previous security studies, give a finite advantage to the eavesdropper in the experimentally relevant lossy channel, but are not powerful enough to reduce the range of the reverse reconciliation protocols. Secret key rates are given for the ideal case where Bob performs optimal collective measurements, as well as for the realistic cases where he performs homodyne or heterodyne measurements. We also apply the generic security proof of Christiandl et. al. [quant-ph 0402131] to obtain unconditionally secure rates for these protocols.", "We study quantum key distribution with standard weak coherent states and show, rather counter-intuitively, that the detection events originated from vacua can contribute to secure key generation rate, over and above the best prior art result. Our proof is based on a communication complexity quantum memory argument." ] }
cs0512060
2950348024
We propose efficient distributed algorithms to aid navigation of a user through a geographic area covered by sensors. The sensors sense the level of danger at their locations and we use this information to find a safe path for the user through the sensor field. Traditional distributed navigation algorithms rely upon flooding the whole network with packets to find an optimal safe path. To reduce the communication expense, we introduce the concept of a skeleton graph which is a sparse subset of the true sensor network communication graph. Using skeleton graphs we show that it is possible to find approximate safe paths with much lower communication cost. We give tight theoretical guarantees on the quality of our approximation and by simulation, show the effectiveness of our algorithms in realistic sensor network situations.
Navigating a sensor field in the presence of danger zones is similar to path planning in the presence of obstacles. There are two obvious ways to approach this problem: a greedy geographic scheme similar to GPSR routing @cite_13 , and exhaustive search. In a geographic scheme, one greedily moves towards the destination and traverses around the danger zones encountered on the way. This scheme has very low communication overhead, but can lead to highly suboptimal paths as shown in Fig. . The global exhaustive search algorithm floods the network with packets to carry out a Breadth-First-Search (BFS) on the communication graph. This algorithm is optimal in terms of path length, but very expensive in terms of communication cost.
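The contrast between these two baseline approaches can be sketched in a few lines of Python. The snippet below is an illustration only, not the skeleton-graph algorithm proposed in the paper; the grid topology and all names are hypothetical. It runs a BFS that avoids danger nodes next to a greedy geographic walk that always steps to the safe neighbor closest to the destination.

from collections import deque

def bfs_safe_path(neighbors, danger, src, dst):
    # Exhaustive search: shortest safe path, at the cost of touching many nodes.
    parent, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in neighbors(u):
            if v not in parent and v not in danger:
                parent[v] = u
                frontier.append(v)
    return None

def greedy_geographic(neighbors, danger, src, dst, max_steps=200):
    # Greedy scheme: cheap in communication, but may detour badly or get stuck
    # (a real protocol such as GPSR then falls back to perimeter traversal).
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    path, u = [src], src
    for _ in range(max_steps):
        if u == dst:
            return path
        safe = [v for v in neighbors(u) if v not in danger and v not in path]
        if not safe:
            return None
        u = min(safe, key=lambda v: dist(v, dst))
        path.append(u)
    return None

# A 10x10 grid of sensors with a rectangular danger zone in the middle.
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

danger = {(x, y) for x in range(3, 7) for y in range(2, 8)}
p_bfs = bfs_safe_path(neighbors, danger, (0, 5), (9, 5))
p_greedy = greedy_geographic(neighbors, danger, (0, 5), (9, 5))
print(len(p_bfs) - 1, len(p_greedy) - 1)  # optimal vs. greedy hop counts (greedy is longer here)

In a real deployment the BFS corresponds to flooding the whole network, which is exactly the cost that the skeleton-graph construction is designed to avoid.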
{ "cite_N": [ "@cite_13" ], "mid": [ "2101963262" ], "abstract": [ "We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks." ] }
cs0512060
2950348024
We propose efficient distributed algorithms to aid navigation of a user through a geographic area covered by sensors. The sensors sense the level of danger at their locations and we use this information to find a safe path for the user through the sensor field. Traditional distributed navigation algorithms rely upon flooding the whole network with packets to find an optimal safe path. To reduce the communication expense, we introduce the concept of a skeleton graph which is a sparse subset of the true sensor network communication graph. Using skeleton graphs we show that it is possible to find approximate safe paths with much lower communication cost. We give tight theoretical guarantees on the quality of our approximation and by simulation, show the effectiveness of our algorithms in realistic sensor network situations.
The concept of a minimum exposure path was introduced by Meguerdichian et al. @cite_6 . Veltri et al. @cite_8 gave heuristics to distributedly compute minimal and maximal exposure paths in sensor networks. Path planning in the context of sensor networks was addressed by Li et al. @cite_3 , who consider the problem of finding a minimum exposure path; their approach involves exhaustive search over the whole network. Recently, Liu et al. @cite_2 used the concept of searching a sparse subgraph to implement algorithms for resource discovery in sensor networks. This work, which was carried out independently of ours, does not, however, address the problem of finding paths when parts of the sensor network are blocked due to danger. Some of our work is inspired by the mesh generation problem @cite_4 @cite_5 in computational geometry.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_3", "@cite_6", "@cite_2", "@cite_5" ], "mid": [ "180909918", "2167477163", "2039252979", "1991523989", "2022390945", "1659568777" ], "abstract": [ "", "Sensor networks not only have the potential to change the way we use, interact with, and view computers, but also the way we use, interact with, and view the world around us. In order to maximize the effectiveness of sensor networks, one has to identify, examine, understand, and provide solutions for the fundamental problems related to wireless embedded sensor networks. We believe that one of such problems is to determine how well the sensor network monitors the instrumented area. These problems are usually classified as coverage problems. There already exist several methods that have been proposed to evaluate a sensor network's coverage.We start from one of such method and provide a new approach to complement it. The method of using the minimal exposure path to quantify coverage has been optimally solved using a numerical approximation approach. The minimal exposure path can be thought of as the worst-case coverage of a sensor network. Our first goal is to develop an efficient localized algorithm that enables a sensor network to determine its minimal exposure path. The theoretical highlight of this paper is the closed-form solution for minimal exposure in the presence of a single sensor. This solution is the basis for the new and significantly faster localized approximation algorithm that reduces the theoretical complexity of the previous algorithm. On the other hand, we introduce a new coverage problem - the maximal exposure path - which is in a sense the best-case coverage path for a sensor network. We prove that the maximal exposure path problem is NP-hard, and thus, we provide heuristics to generate approximate solutions.In addition, we demonstrate the effectiveness of our algorithms through several simulations. In the case of the minimal single-source minimal exposure path, we use variational calculus to determine exact solutions. For the case of maximal exposure, we use networks with varying numbers of sensors and exposure models.", "We develop distributed algorithms for self-organizing sensor networks that respond to directing a target through a region. The sensor network models the danger levels sensed across its area and has the ability to adapt to changes. It represents the dangerous areas as obstacles. A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas. We give the analysis to the protocol and report on hardware experiments using a physical sensor network consisting of Mote sensors.", "Wireless ad-hoc sensor networks will provide one of the missing connections between the Internet and the physical world. One of the fundamental problems in sensor networks is the calculation of coverage. Exposure is directly related to coverage in that it is a measure of how well an object, moving on an arbitrary path, can be observed by the sensor network over a period of time. In addition to the informal definition, we formally define exposure and study its properties. We have developed an efficient and effective algorithm for exposure calculation in sensor networks, specifically for finding minimal exposure paths. 
The minimal exposure path provides valuable information about the worst case exposure-based coverage in sensor networks. The algorithm works for any given distribution of sensors, sensor and intensity models, and characteristics of the network. It provides an unbounded level of accuracy as a function of run time and storage. We provide an extensive collection of experimental results and study the scaling behavior of exposure and the proposed algorithm for its calculation.", "In this paper we investigate efficient strategies for supporting on-demand information dissemination and gathering in large-scale vwireless sensor networks. In particular, we propose a \"comb-needle\" discovery support model resembling an ancient method: use a comb to help find a needle in sands or a haystack. The model combines push and pull for information dissemination and gathering. The push component features data duplication in a linear neighborhood of each node. The pull component features a dynamic formation of an on-demand routing structure resembling a comb. The comb-needle model enables us to investigate the cost of a spectrum of push and pull combinations for supporting discovery and query in large scale sensor networks. Our result shows that the optimal routing structure depends on the frequency of query occurrence and the spatial-temporal frequency of related events in the network. The benefit of balancing push and pull for discovery in large scale geometric networks are demonstrated. We also raise the issue of query coverage in unreliable networks and investigate how redundancy can improve the coverage via both theoretical analysis and simulation. Last, we study adaptive strategies for the case where the frequencies of query and events are unknown a priori and time-varying.", "" ] }
cs0512069
1646304050
Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, “lazy” webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of “lazy preservation”- digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick’s window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches.
With regard to archiving websites, organizations like the Internet Archive and national libraries are currently engaged in archiving the external (or client's) view of selected websites @cite_26 and in improving that process by building better web crawlers and tools @cite_7 . Systems have also been developed to ensure long-term access to Web content within repositories and digital libraries @cite_36 .
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_7" ], "mid": [ "2088429233", "179202249", "2072475455" ], "abstract": [ "LOCKSS (Lots Of Copies Keep Stuff Safe) is a tool designed for libraries to use to ensure their community's continued access to web-published scientific journals. LOCKSS allows libraries to take custody of the material to which they subscribe, in the same way they do for paper, and to preserve it. By preserving it they ensure that, for their community, links and searches continue to resolve to the published material even if it is no longer available from the publisher. Think of it as the digital equivalent of stacks where an authoritative copy of material is always available rather than the digital equivalent of an archive. LOCKSS allows libraries to run web caches for specific journals. These caches collect content as it is published and are never flushed. They cooperate in a peer-to-peer network to detect and repair damaged or missing pages. The caches run on generic PC hardware using open-source software and require almost no skilled administration, making the cost of preserving a journal manageable.", "", "Recently the Library of Congress began developing a strategy for the preservation of digital content. Efforts have focused on the need to select, harvest, describe, access and preserve Web resources. This poster focuses on the Library's initial investigation and evaluation of Web harvesting software tools." ] }
cs0512069
1646304050
Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, “lazy” webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of “lazy preservation”- digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick’s window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches.
Numerous systems have been built to archive individual websites and web pages. InfoMonitor archives the server-side components (e.g., CGI scripts and data files) and filesystem of a web server @cite_34 . It requires an administrator to configure the system and a separate server with adequate disk space to hold the archives. Other systems like TTApache @cite_27 and iPROXY @cite_16 archive requested pages from a web server but not the server-side components. TTApache is an Apache module which archives different versions of web resources as they are requested from a web server. Users can view archived content through specially formatted URLs. iPROXY is similar to TTApache except that it uses a proxy server and archives requested resources for the client from any number of web servers. A similar approach using a proxy server with a content management system for storing and accessing Web resources was proposed in @cite_21 . Commercial systems like Furl ( http://furl.net ) and Spurl.net ( http://spurl.net ) also allow users to archive selected web resources that they deem important.
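As a rough illustration of the request-time archiving idea behind tools like TTApache and iPROXY, the Python sketch below fetches a resource and stores a timestamped snapshot before handing it back. It is not code from any of those systems; the directory layout and names are hypothetical.

import hashlib, time, urllib.request
from pathlib import Path

ARCHIVE_DIR = Path("web_archive")

def fetch_and_archive(url):
    # Fetch a resource and keep a timestamped copy, one file per version.
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    key = hashlib.sha1(url.encode()).hexdigest()[:12]
    stamp = time.strftime("%Y%m%dT%H%M%S")
    out = ARCHIVE_DIR / key
    out.mkdir(parents=True, exist_ok=True)
    (out / (stamp + ".snapshot")).write_bytes(body)
    return body

A proxy- or server-side module would wrap every request in a function of this kind, which is what makes the archiving transparent to the user.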
{ "cite_N": [ "@cite_27", "@cite_34", "@cite_21", "@cite_16" ], "mid": [ "2117044215", "2024882820", "2005783315", "2091712605" ], "abstract": [ "This paper presents a transaction-time HTTP server, called TTApache that supports document versioning. A document often consists of a main file formatted in HTML or XML and several included files such as images and stylesheets. A change to any of the files associated with a document creates a new version of that document. To construct a document version history, snapshots of the document's files are obtained over time. Transaction times are associated with each file version to record the version's lifetime. The transaction time is the system time of the edit that created the version. Accounting for transaction time is essential to supporting audit queries that delve into past document versions and differential queries that pinpoint differences between two versions. TTApache performs automatic versioning when a document is read thereby removing the burden of versioning from document authors. Since some versions may be created but never read, TTApache distinguishes between known and assumed versions of a document. TTApache has a simple query language to retrieve desired versions. A browser can request a specific version, or the entire history of a document. Queries can also rewrite links and references to point to current or past versions. Over time, the version history of a document continually grows. To free space, some versions can be vacuumed. Vacuuming a version however changes the semantics of requests for that version. This paper presents several policies for vacuuming versions and strategies for accounting for vacuumed versions in queries.", "It is important to provide long-term preservation of digital data even when those data are stored in an unreliable system such as a filesystem, a legacy database, or even the World Wide Web. In this paper we focus on the problem of archiving the contents of a Web site without disrupting users who maintain the site. We propose an archival storage system, the InfoMonitor, in which a reliable archive is integrated with an unmodified existing store. Implementing such a system presents various challenges related to the mismatch of features between the components such as differences in naming and data manipulation operations. We examine each of these issues as well as solutions for the conflicts that arise. We also discuss our experience using the InfoMonitor to archive the Stanford Database Group's Web site.", "The growth of the World Wide Web holds great promise for universal online information access. New information is constantly being made available for users. However, the information accessible on the Web changes constantly. These changes may occur as modifications to both the content and the location of previously existing Web resources. As these changes occur, the accessibility to past versions of such Web resources is often lost. This paper presents an approach to provide content persistence of Web resources by organizing collections of historical Web resources in a distributed configuration management system to allow online, read-only access to the versioned resources.", "The Web contains so much information that it is almost beyond measure. How do users manage the useful information that they have seen while screening out the rest that doesn't interest them? Bookmarks help, but bookmarking a page doesn't guarantee that it will be available forever. 
Search engines are becoming more powerful, but they can't be customized based on the access history of individual users. This paper suggests that a better alternative to managing web information is through a middleware approach based on iPROXY, a programmable proxy server. iPROXY offers a suite of archiving, retrieval, and searching services. It can extend a URL to include commands that archive and retrieve pages. Its modular architecture allows users to plug in new features without having to change existing browsers or servers. Once installed on a network, iPROXY can be accessed by users using different browsers and devices. Internet service providers who offer customers iPROXY will be free to develop new services without having to wait for the dominant browsers to be updated." ] }
cs0512069
1646304050
Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, “lazy” webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of “lazy preservation”- digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick’s window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches.
The coverage of the indexable Web by search engines has been estimated most recently in @cite_19 , but no measurement of SE cache sizes or of the types of files stored in the SE caches has been reported. We are also unaware of any research that documents the crawling and caching behavior of commercial SEs.
{ "cite_N": [ "@cite_19" ], "mid": [ "2080676333" ], "abstract": [ "In this short paper we estimate the size of the public indexable web at 11.5 billion pages. We also estimate the overlap and the index size of Google, MSN, Ask Teoma and Yahoo!" ] }
cs0511008
1770382502
A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m.b.c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i) -- (v) under the independent case, but also provides a convenient way to find the stochastic service curve of a serve. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server.
Table summarizes the properties that are provided by the combination of a traffic model, chosen from the t.a.c, v.b.c and m.b.c stochastic arrival curves, and a server model, chosen from the weak stochastic service curve and the stochastic service curve, without any additional constraints on the traffic model or the server model. In Section , we discussed that, in the context of network calculus, most traffic models used in the literature @cite_32 @cite_7 @cite_4 @cite_14 @cite_1 @cite_18 @cite_5 @cite_12 @cite_28 @cite_21 @cite_11 belong to the t.a.c and v.b.c stochastic arrival curve models, and most server models @cite_17 @cite_7 @cite_1 @cite_18 @cite_5 @cite_12 @cite_11 belong to the weak stochastic service curve model. Table shows that, without additional constraints, these works can only support part of the five required properties for the stochastic network calculus. In contrast, with the m.b.c stochastic arrival curve and the stochastic service curve, all these properties have been proved in this section.
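For orientation, the following recalls the forms in which these models are commonly stated in the stochastic network calculus literature. This is a sketch only: it assumes the usual notation A(s,t) for the cumulative arrivals in (s,t], an arrival curve \alpha with bounding function f, a departure process A^*, a service curve \beta with bounding function g, and the (min,+) convolution \otimes; the paper's own definitions may differ in details.

\[
\begin{aligned}
&\text{t.a.c:} && P\{A(s,t) - \alpha(t-s) > x\} \le f(x) \quad \text{for all } 0 \le s \le t,\\
&\text{v.b.c:} && P\Big\{\sup_{0 \le s \le t}\big[A(s,t) - \alpha(t-s)\big] > x\Big\} \le f(x) \quad \text{for all } t \ge 0,\\
&\text{m.b.c:} && P\Big\{\sup_{0 \le u \le t}\,\sup_{0 \le s \le u}\big[A(s,u) - \alpha(u-s)\big] > x\Big\} \le f(x) \quad \text{for all } t \ge 0,
\end{aligned}
\]
\[
\text{weak stochastic service curve:}\quad P\{A \otimes \beta(t) - A^*(t) > x\} \le g(x),
\qquad
\text{stochastic service curve:}\quad P\Big\{\sup_{0 \le s \le t}\big[A \otimes \beta(s) - A^*(s)\big] > x\Big\} \le g(x).
\]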
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_28", "@cite_21", "@cite_1", "@cite_32", "@cite_17", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1987582408", "2142917009", "2022940918", "95741843", "", "1600548747", "2120228516", "2109745615", "2002425684", "10991463", "1484844389", "2082841997" ], "abstract": [ "This paper establishes a link between two principal tools for the analysis of network traffic, namely, effective bandwidth and network calculus. It is shown that a general version of effective bandwidth can be expressed within the framework of a probabilistic version of the network calculus, where both arrivals and service are specified in terms of probabilistic bounds. By formulating well-known effective bandwidth expressions in terms of probabilistic envelope functions, the developed network calculus can be applied to a wide range of traffic types, including traffic that has self-similar characteristics. As applications, probabilistic lower bounds are presented on the service given by three different scheduling algorithms: static priority, earliest deadline first, and generalized processor sharing. Numerical examples show the impact of specific traffic models and scheduling algorithms on the multiplexing gain in a network.", "We introduce the concept of generalized stochastically bounded burstiness (gSBB) for Internet traffic, the tail distribution of whose burstiness can be bounded by a decreasing function in a function class with few restrictions. This new concept extends the concept of stochastically bounded burstiness (SBB) introduced by previous researchers to a much larger extent - while the SBB model can apply to Gaussian self-similar input processes, such as fractional Brownian motion, gSBB traffic contains non-Gaussian self-similar input processes, such as spl alpha -stable self-similar processes, which are not SBB in general. We develop a network calculus for gSBB traffic. We characterize gSBB traffic by the distribution of its queue size. We explore the property of sums of gSBB traffic and the relation of input and output processes. We apply this calculus to a work-conserving system shared by a number of gSBB sources, to analyze the behavior of output traffic for each source and to estimate the probabilistic bounds for delays. We expect this new calculus to be of particular interest in the implementation of services with statistical qualitative guarantees.", "A network calculus is developed for processes whose burstiness is stochastically bounded by general decreasing functions. This calculus is useful for a large class of input processes, including important processes exhibiting \"subexponentially bounded burstiness\" such as fractional Brownian motion. Moreover, it allows judicious capture of the salient features of real-time traffic, such as the \"cell\" and \"burst\" characteristics of multiplexed traffic. This accurate characterization is achieved by setting the bounding function as a sum of exponentials.", "", "", "Many communication networks such as wireless networks only provide stochastic service guarantees. For analyzing stochastic service guarantees, research efforts have been made in the past few years to develop stochastic network calculus, a probabilistic version of (min, +) deterministic network calculus. However, many challenges have made the development difficult. 
Some of them are closely related to server modeling, which include output characterization, concatenation property, stochastic backlog guarantee, stochastic delay guarantee, and per-flow service under aggregation. In this paper, we propose a server model, called stochastic service curve to facilitate stochastic service guarantee analysis. We show that with the concept of stochastic service curve, these challenges can be well addressed. In addition, we introduce strict stochastic server to help find the stochastic service curve of a stochastic server, which characterizes the service of the server by two stochastic processes: an ideal service process and an impairment process.", "The deterministic network calculus offers an elegant framework for determining delays and backlog in a network with deterministic service guarantees to individual traffic flows. A drawback of the deterministic network calculus is that it only provides worst-case bounds. Here we present a network calculus for statistical service guarantees, which can exploit the statistical multiplexing gain of sources. We introduce the notion of an effective service curve as a probabilistic bound on the service received by an individual flow, and construct an effective service curve for a network where capacities are provisioned exclusively to aggregates of flows. Numerical examples demonstrate that the calculus is able to extract a significant amount of multiplexing gain in networks with a large number of flows.", "A method for evaluating the performance of packet switching communication networks under a fixed, session-based, routing strategy is proposed. The approach is based on properly bounding the probability distribution functions of the system input processes. The suggested bounds which are decaying exponentials, possess three convenient properties. When the inputs to an isolated network element are all bounded, they result in bounded outputs and assure that the delays and queues in this element have exponentially decaying distributions. In some network settings, bounded inputs result in bounded outputs. Natural traffic processes can be shown to satisfy such bounds. Consequently, this method enables the analysis of various previously intractable setups. Sufficient conditions are provided for the stability of such networks, and derive upper bounds for the parameters of network performance are derived. >", "In most network models for quality of service support, the communication links interconnecting the switches and gateways are assumed to have fixed bandwidth and zero error rate. This assumption of steadiness, especially in a heterogeneous internet-working environment, might be invalid owing to subnetwork multiple-access mechanism, link-level flow error control, and user mobility. Techniques are presented in this paper to characterize and analyze work-conserving communication nodes with varying output rate. In the deterministic approach, the notion of \"fluctuation constraint,\" analogous to the \"burstiness constraint\" for traffic characterization, is introduced to characterize the node. In the statistical approach, the variable-rate output is modelled as an \"exponentially bounded fluctuation\" process in a way similar to the \"exponentially bounded burstiness\" method for traffic modelling. 
Based on these concepts, deterministic and statistical bounds on queue size and packet delay in isolated variable-rate communication server-nodes are derived, including cases of single-input and multiple-input under first-come-first-serve queueing. Queue size bounds are shown to be useful for buffer requirement and packet loss probability estimation at individual nodes. Our formulations also facilitate the computation of end-to-end performance bounds across a feedforward network of variable-rate server-nodes. Several numerical examples of interest are given in the discussion.", "The issue of Quality of Service (QoS) performance analysis in packet switching networks has drawn a lot of attention in the networking community in recent years. There is a lot of work including an elegant theory under the name of network calculus focusing on analysis of deterministic worst case QoS performance bounds. In the meantime, other researchers have studied the stochastic QoS performance for specific schedulers. As yet, there has been no systematic investigation and analysis of end-to-end stochastic QoS performance. On the other hand, most of the previous work on deterministic QoS analysis or stochastic QoS analysis only considered a server which provides deterministic service, i.e. deterministically bounded rate service. Few works have considered the behavior of a stochastic server providing input flows with variable rate service. In this report, we propose a stochastic network calculus to systematically analyze the end-to-end stochastic QoS performance of a system with stochastically bounded input traffic over a series of deterministic and stochastic servers. The proposed framework is also applied to analyze per-flow stochastic QoS performance since a server serving an aggregate of flows can be regarded as a stochastic server for individual flows within the aggregate under aggregate scheduling. Keywords Stochastic Modeling, Network Calculus, Quality of Service", "We propose a probabilistic characterization of network traffic. This characterization can handle traffic with heavy-tailed distributions in performance analysis. We show that queue size, output traffic, virtual delay, aggregate traffic, etc. at various points in a network can easily be characterized within the framework. This characterization is measurable and allows for a simple probabilistic method for regulating network traffic. All of these properties of the proposed characterization enable a systematic approach for providing end-to-end probabilistic QoS guarantees", "The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the exponentially bounded burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by O(Hlog H), where H is the number of nodes traversed by a flow. Using currently available techniques that compute end-to-end bounds by adding single node results, the corresponding performance measures are bounded by O(H3)." ] }
cs0511008
1770382502
A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m.b.c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i) -- (v) under the independent case, but also provides a convenient way to find the stochastic service curve of a server. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server.
One type uses a sequence of random variables to stochastically bound the arrival process @cite_34 or the service process @cite_38 . Properties similar to (P.1), (P.3), (P.4) and (P.5) have been studied in @cite_34 @cite_38 . These studies generally require an independence assumption. Under this type of traffic and service model, several problems remain open and are out of the scope of this paper: one is the concatenation property (P.2), another is the general case analysis, and the third is the design of approaches that map known traffic and service characterizations to the required sequences of random variables.
{ "cite_N": [ "@cite_38", "@cite_34" ], "mid": [ "2062706778", "1985431233" ], "abstract": [ "Networks that support multiple services through \"link-sharing\" must address the fundamental conflicting requirement between isolation among service classes to satisfy each class' quality of service requirements, and statistical sharing of resources for efficient network utilization. While a number of service disciplines have been devised which provide mechanisms to both isolate flows and fairly share excess capacity, admission control algorithms are needed which exploit the effects of inter-class resource sharing. In this paper, we develop a framework of using statistical service envelopes to study inter-class statistical resource sharing. We show how this service envelope enables a class to over-book resources beyond its deterministically guaranteed capacity by statistically characterizing the excess service available due to fluctuating demands of other service classes. We apply our techniques to several multi-class schedulers, including generalized processor sharing, and design new admission control algorithms for multi-class link-sharing environments. We quantify the utilization gains of our approach with a set of experiments using long traces of compressed video.", "We present a technique for computing upper bounds on the distribution of individual per-session performance measures such as delay and buffer occupancy for networks in which sessions may be routed over several “hops.” Our approach is based on first stochastically bounding the distribution of the number of packets (or cells) which can be generated by each traffic source over various lengths of time and then “pushing” these bounds (which are then shown to hold over new time interval lengths at various network queues) through the network on a per-session basis. Session performance bounds can then be computed once the stochastic bounds on the arrival process have been characterized for each session at all network nodes. A numerical example is presented and the resulting distributional bounds compared with simulation as well as with a point-valued worst-case performance bound." ] }
cs0511008
1770382502
A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m.b.c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i) -- (v) under the independent case, but also provides a convenient way to find the stochastic service curve of a server. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server.
Another type is built upon moments or moment generating functions. This type was initially used for traffic @cite_10 @cite_8 and has also been extended to service @cite_20 @cite_15 . An independence assumption is generally required between the arrival and service processes. Extensive work has been conducted on deriving the characteristics of a process under this type of model from some known characterization of the process @cite_10 @cite_24 @cite_20 . The main open problems for this type are the concatenation property (P.2) and the general case analysis. Although these problems are out of the scope of this paper, we prove, in a later section, results that relate the moment generating function model to the proposed m.b.c stochastic arrival curve and stochastic service curve. These results will make it possible to further relate known traffic and service characterizations to the traffic and service models proposed in this paper.
{ "cite_N": [ "@cite_8", "@cite_24", "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "2090888999", "2112919802", "2115632051", "2111764355", "1978905175" ], "abstract": [ "A crucial problem for the efficient design and management of integrated services networks is how to best allocate network resources for heterogeneous and bursty traffic streams in multiplexers that support prioritized service disciplines. In this paper, we introduce a new approach for determining per-connection performance parameters such as delay-bound violation probability and loss probability in multi-service networks. The approach utilizes a traffic characterization consisting of the variances of a stream's rate distribution over multiple interval lengths, which captures its burstiness properties and autocorrelation structure. From this traffic characterization, we provide a simple and efficient resource allocation algorithm by deriving stochastic delay-bounds for static priority schedulers and employing a Gaussian approximation over intervals. To evaluate the scheme, we perform trace-driven simulation experiments with long traces of MPEG-compressed video and show that our approach is accurate enough to capture most of the inherent statistical multiplexing gain, achieving average network utilizations of up to 90 for these traces and substantially outperforming previous \"effective bandwidth\" techniques.", "Considers stochastic linear systems under the max-plus algebra. For such a system, the states are governed by the recursive equation X sub n =A sub n spl otimes X sub n-1 spl oplus U sub n with the initial condition condition X sub 0 =x sub 0 . By transforming the linear system under the max-plus algebra into a sublinear system under the usual algebra, we establish various exponential upper bounds for the tail distributions of the states X sub n under the independently identically distributed (i.i.d.) assumption on (A sub n ,U sub n ) sub 1 n spl ges 1 and a couple of regularity conditions on (A sub 1 ,U sub 1 ) and the initial condition x sub 0 . These upper bounds are related to the spectral radius (or the Perron-Frobenius eigenvalue) of the nonnegative matrix in which each element is the moment generating function of the corresponding element in the state-feedback matrix A sub 1 . In particular, we have Kingman's upper bound for GI GI 1 queue when the system is one-dimensional. We also show that some of these upper bounds can be achieved if A sub 1 is lower triangular. These bounds are applied to some commonly used systems to derive new results or strengthen known results.", "To facilitate the efficient support of quality of service (QoS) in next-generation wireless networks, it is essential to model a wireless channel in terms of connection-level QoS metrics such as data rate, delay, and delay-violation probability. However, the existing wireless channel models, i.e., physical-layer channel models, do not explicitly characterize a wireless channel in terms of these QoS metrics. In this paper, we propose and develop a link-layer channel model termed effective capacity (EC). In this approach, we first model a wireless link by two EC functions, namely, the probability of nonempty buffer, and the QoS exponent of a connection. Then, we propose a simple and efficient algorithm to estimate these EC functions. The physical-layer analogs of these two link-layer EC functions are the marginal distribution (e.g., Rayleigh-Ricean distribution) and the Doppler spectrum, respectively. 
The key advantages of the EC link-layer modeling and estimation are: 1) ease of translation into QoS guarantees, such as delay bounds; 2) simplicity of implementation; and 3) accuracy, and hence, efficiency in admission control and resource reservation. We illustrate the advantage of our approach with a set of simulation experiments, which show that the actual QoS metric is closely approximated by the QoS metric predicted by the EC link-layer model, under a wide range of conditions.", "We present two types of stability problems: 1) conditions for queueing networks that render bounded queue lengths and bounded delay for customers, and 2) conditions for queueing networks in which the queue length distribution of a queue has an exponential tail with rate spl theta . To answer these two types of stability problems, we introduce two new notions of traffic characterization: minimum envelope rate (MER) and MER with respect to spl theta . We also develop a set of rules for network operations such as superposition, input-output relation of a single queue, and routing. Specifically, we show that: 1) the MER of a superposition process is less than or equal to the sum of the MER of each process, 2) a queue is stable in the sense of bounded queue length if the MER of the input traffic is smaller than the capacity, 3) the MER of a departure process from a stable queue is less than or equal to that of the input process, and 4) the MER of a routed process from a departure process is less than or equal to the MER of the departure process multiplied by the MER of the routing process. Similar results hold for MER with respect to spl theta under a further assumption of independence. For single class networks with nonfeedforward routing, we provide a new method to show that similar stability results hold for such networks under the first come, first served policy. Moreover, when restricting to the family of two-state Markov modulated arrival processes, the notion of MER with respect to spl theta is shown to be equivalent to the recently developed notion of effective bandwidth in communication networks. >", "From the Publisher: Providing performance guarantees is one of the most important issues for future telecommunication networks. This book describes theoretical developments in performance guarantees for telecommunication networks from the last decade. Written for the benefit of graduate students and scientists interested in telecommunications-network performance this book consists of two parts." ] }
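To illustrate the moment-generating-function type of model discussed in the related-work paragraph above, the fragment below shows the standard Chernoff-type step that such calculi build on. It is a generic single-node sketch under an independence assumption, not a result quoted from the cited papers; A(s,t) and S(s,t) stand for cumulative arrivals and cumulative service.

```latex
% For any \theta > 0, Markov's inequality applied to e^{\theta(A(s,t)-S(s,t))},
% together with independence of arrivals and service, gives
\[
  \Pr\{ A(s,t) - S(s,t) > x \}
  \;\le\; e^{-\theta x}\,
          \mathbb{E}\!\left[ e^{\theta A(s,t)} \right]
          \mathbb{E}\!\left[ e^{-\theta S(s,t)} \right] ,
\]
% so bounds on the moment generating functions of arrivals and service
% translate directly into exponential tail bounds on backlog-type quantities.
```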
cs0511043
2952861886
We present Poseidon, a new anomaly based intrusion detection system. Poseidon is payload-based, and presents a two-tier architecture: the first stage consists of a Self-Organizing Map, while the second one is a modified PAYL system. Our benchmarks on the 1999 DARPA data set show a higher detection rate and lower number of false positives than PAYL and PHAD.
Cannady @cite_32 proposes a SOM-based IDS in which network packets are first classified according to nine features and then presented to the neural network. Attack traffic is generated using a security audit tool. The author extends this work in Cannady @cite_29 @cite_14 .
{ "cite_N": [ "@cite_29", "@cite_14", "@cite_32" ], "mid": [ "2411715191", "32779056", "1674877186" ], "abstract": [ "", "The timely and accurate detection of computer and network system intrusions has always been an elusive goal for system administrators and information security researchers. Existing intrusion detection approaches require either manual coding of new attacks in expert systems or the complete retraining of a neural network to improve analysis or learn new attacks. This paper presents a new approach to applying adaptive neural networks to intrusion detection that is capable of autonomously learning new attacks rapidly through the use of a modified reinforcement learning method that uses feedback from the protected system. The approach has been demonstrated to be extremely effective in learning new attacks, detecting previously learned attacks in a network data stream, and in autonomously improving its analysis over time using feedback from the protected system.", "Network intrusion detection systems (NIDS) are an important part of any network security architecture. They provide a layer of defense which monitors network traffic for predefined suspicious activity or patterns, and alert system administrators when potential hostile traffic is detected. Commercial NIDS have many differences, but Information Systems departments must face the commonalities that they share such as significant system footprint, complex deployment and high monetary cost. Snort was designed to address these issues." ] }
cs0511043
2952861886
We present Poseidon, a new anomaly based intrusion detection system. Poseidon is payload-based, and presents a two-tier architecture: the first stage consists of a Self-Organizing Map, while the second one is a modified PAYL system. Our benchmarks on the 1999 DARPA data set show a higher detection rate and lower number of false positives than PAYL and PHAD.
Zanero @cite_18 presents a two-tier payload-based system that combines a self-organizing map with a modified version of SmartSifter @cite_6 . While this architecture is similar to POSEIDON, a full comparison is not possible because the benchmarks in @cite_18 concern only the FTP service and no details are given about the execution of the experiments. A two-tier architecture for intrusion detection is also outlined in Zanero and Savaresi @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_6" ], "mid": [ "1982304603", "", "2045064676" ], "abstract": [ "With the continuous evolution of the types of attacks against computer networks, traditional intrusion detection systems, based on pattern matching and static signatures, are increasingly limited by their need of an up-to-date and comprehensive knowledge base. Data mining techniques have been successfully applied in host-based intrusion detection. Applying data mining techniques on raw network data, however, is made difficult by the sheer size of the input; this is usually avoided by discarding the network packet contents.In this paper, we introduce a two-tier architecture to overcome this problem: the first tier is an unsupervised clustering algorithm which reduces the network packets payload to a tractable size. The second tier is a traditional anomaly detection algorithm, whose efficiency is improved by the availability of data on the packet payload content.", "", "Outlier detection is a fundamental issue in data mining, specifically in fraud detection, network intrusion detection, network monitoring, etc. SmartSifter is an outlier detection engine addressing this problem from the viewpoint of statistical learning theory. This paper provides a theoretical basis for SmartSifter and empirically demonstrates its effectiveness. SmartSifter detects outliers in an on-line process through the on-line unsupervised learning of a probabilistic model (using a finite mixture model) of the information source. Each time a datum is input SmartSifter employs an on-line discounting learning algorithm to learn the probabilistic model. A score is given to the datum based on the learned model with a high score indicating a high possibility of being a statistical outlier. The novel features of SmartSifter are: (1) it is adaptive to non-stationary sources of datas (2) a score has a clear statistical information-theoretic meanings (3) it is computationally inexpensives and (4) it can handle both categorical and continuous variables. An experimental application to network intrusion detection shows that SmartSifter was able to identify data with high scores that corresponded to attacks, with low computational costs. Further experimental application has identified a number of meaningful rare cases in actual health insurance pathology data from Australia's Health Insurance Commission." ] }
cs0511102
2122887675
Because a delay tolerant network (DTN) can often be partitioned, routing is a challenge. However, routing benefits considerably if one can take advantage of knowledge concerning node mobility. This paper addresses this problem with a generic algorithm based on the use of a high-dimensional Euclidean space, that we call MobySpace, constructed upon nodes' mobility patterns. We provide here an analysis and a large scale evaluation of this routing scheme in the context of ambient networking by replaying real mobility traces. The specific MobySpace evaluated is based on the frequency of visits of nodes to each possible location. We show that routing based on MobySpace can achieve good performance compared to that of a number of standard algorithms, especially for nodes that are present in the network a large portion of the time. We determine that the degree of homogeneity of node mobility patterns has a high impact on routing. And finally, we study the ability of nodes to learn their own mobility patterns.
Some work concerning routing in DTNs has been performed with scheduled contacts, such as the work in @cite_28 , which tries to improve the connectivity of an isolated village to the Internet based on knowledge of when a low-earth orbiting relay satellite and a motorbike might be available to make the necessary connections. Also of interest, work on interplanetary networking @cite_18 @cite_26 uses predicted contacts, such as the ones between planets, within the framework of a DTN architecture.
{ "cite_N": [ "@cite_28", "@cite_18", "@cite_26" ], "mid": [ "2162076967", "2082199994", "2097625638" ], "abstract": [ "We formulate the delay-tolerant networking routing problem, where messages are to be moved end-to-end across a connectivity graph that is time-varying but whose dynamics may be known in advance. The problem has the added constraints of finite buffers at each node and the general property that no contemporaneous end-to-end path may ever exist. This situation limits the applicability of traditional routing approaches that tend to treat outages as failures and seek to find an existing end-to-end path. We propose a framework for evaluating routing algorithms in such environments. We then develop several algorithms and use simulations to compare their performance with respect to the amount of knowledge they require about network topology. We find that, as expected, the algorithms using the least knowledge tend to perform poorly. We also find that with limited additional knowledge, far less than complete global knowledge, efficient algorithms can be constructed for routing in such environments. To the best of our knowledge this is the first such investigation of routing issues in DTNs.", "The developments in the space technologies are enabling the realization of deep space scientific missions such as Mars exploration. InterPlaNetary (IPN) Internet is expected to be the next step in the design and development of deep space networks as the Internet of the deep space planetary networks. However, there exist significant challenges to be addressed for the realization of this objective. Many researchers and several international organizations are currently engaged in defining and addressing these challenges and developing the required technologies for the realization of the InterPlaNetary Internet. In this paper, the current status of the research efforts to realize the InterPlaNetary Internet objective is captured. The communication architecture is presented, and the challenges posed by the several aspects of the InterPlaNetary Internet are introduced. The existing algorithms and protocols developed for each layer and the other related work are explored, and their shortcomings are pointed out along with the open research issues for the realization of the InterPlaNetary Internet. The objective of this survey is to motivate the researchers around the world to tackle these challenging problems and help to realize the InterPlaNetary Internet.", "Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet, which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a \"least common denominator\" protocol that can operate successfully and (where required) reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill suited for this purpose. We identify three fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and describe the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling. 
We also examine Internet infrastructure adaptations that might yield comparable performance but conclude that the simplicity of the DTN architecture promises easier deployment and extension." ] }
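The works above exploit the fact that contacts are scheduled or predictable. As a purely illustrative sketch, not one of the routing algorithms evaluated in the cited papers, the Python fragment below computes the earliest possible delivery time over a known contact schedule; the contact tuple format, the fixed per-hop delay, and the example node names are assumptions made for the example.

```python
import heapq

def earliest_delivery(contacts, source, target, t0=0.0):
    """Earliest-arrival search over scheduled contacts.
    Each contact is a tuple (start, end, frm, to, hop_delay): between
    `start` and `end` a bundle held by `frm` can be forwarded to `to`,
    arriving `hop_delay` time units after it is sent."""
    best = {source: t0}
    queue = [(t0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == target:
            return t                      # earliest delivery time
        if t > best.get(node, float("inf")):
            continue                      # stale queue entry
        for start, end, frm, to, hop_delay in contacts:
            if frm != node:
                continue
            depart = max(t, start)        # store the bundle until the contact opens
            if depart > end:
                continue                  # contact window already closed
            arrive = depart + hop_delay
            if arrive < best.get(to, float("inf")):
                best[to] = arrive
                heapq.heappush(queue, (arrive, to))
    return None                           # no contact path exists

# Example: a village node reaching the Internet via a bike ride and a satellite pass.
contacts = [(8.0, 9.0, "village", "bike", 0.1),
            (10.0, 11.0, "bike", "city", 0.1),
            (14.0, 14.2, "city", "satellite", 0.05),
            (14.0, 14.2, "satellite", "internet", 0.05)]
print(earliest_delivery(contacts, "village", "internet", t0=7.5))
```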
cs0510090
1553225280
To define and compute differential invariants, like curvatures, for triangular meshes (or polyhedral surfaces) is a key problem in CAGD and computer vision. The Gaussian curvature and the mean curvature are determined by the differential of the Gauss map of the underlying surface. The Gauss map assigns to each point on the surface the unit normal vector of the tangent plane to the surface at this point. We follow the ideas developed in Chen and Wu Chen2 (2004) and Wu, Chen and Chi Wu (2005) to describe a new and simple approach to estimate the differential of the Gauss map and curvatures from the viewpoint of the gradient and the centroid weights. This will give us a much better estimation of curvatures than Taubin's algorithm Taubin (1995).
Flynn and Jain @cite_9 (1989) used a suitable sphere passing through four vertices to estimate curvatures. Meek and Walton @cite_2 (2000) examined several methods and compared them with the discretization and interpolation method. Gatzke and Grim @cite_3 (2003) systematically analyzed the computation of curvatures on surfaces represented by triangular meshes and recommended the surface fitting methods. See also Petitjean @cite_7 (2002) for the surface fitting methods. The authors of @cite_5 (2003) employed the Gauss-Bonnet theorem to estimate the Gaussian curvatures and introduced the Laplace-Beltrami operator to approximate the mean curvature.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_2", "@cite_5" ], "mid": [ "", "2104776149", "2165995633", "2107059366", "1626653188" ], "abstract": [ "", "An empirical study of the accuracy of five different curvature estimation techniques, using synthetic range images and images obtained from three range sensors, is presented. The results obtained highlight the problems inherent in accurate estimation of curvatures, which are second-order quantities, and thus highly sensitive to noise contamination. The numerical curvature estimation methods are found to perform about as accurately as the analytic techniques, although ensemble estimates of overall surface curvature such as averages are unreliable unless trimmed estimates are used. The median proved to be the best estimator of location. As an exception, it is shown theoretically that zero curvature can be fairly reliably detected, with appropriate selection of threshold values. >", "This paper takes a systematic look at calculating the curvature of surfaces represented by triangular meshes. We have developed a suite of test cases for assessing the sensitivity of curvature calculations, to noise, mesh resolution, and mesh regularity. These tests are applied to existing discrete curvature approximation techniques and three common surface fitting methods (polynomials, radial basis functions and conics). We also introduce a modification to the standard parameterization technique. Finally, we examine the behaviour of the curvature calculation techniques in the context of segmentation.", "Approximations to the surface normal and to the Gaussian curvature of a smooth surface are often required when the surface is defined by a set of discrete points. The accuracy of an approximation can be measured using asymptotic analysis. The errors of several approximations to the surface normal and to the Gaussian curvature are compared. © 2000 Elsevier Science B.V. All rights reserved.", "This paper proposes a unified and consistent set of flexible tools to approximate important geometric attributes, including normal vectors and curvatures on arbitrary triangle meshes. We present a consistent derivation of these first and second order differential properties using averaging Voronoi cells and the mixed Finite-Element Finite-Volume method, and compare them to existing formulations. Building upon previous work in discrete geometry, these operators are closely related to the continuous case, guaranteeing an appropriate extension from the continuous to the discrete setting: they respect most intrinsic properties of the continuous differential operators. We show that these estimates are optimal in accuracy under mild smoothness conditions, and demonstrate their numerical quality. We also present applications of these operators, such as mesh smoothing, enhancement, and quality checking, and show results of denoising in higher dimensions, such as for tensor images." ] }
physics0510151
1969383134
The knowledge of real-life traffic patterns is crucial for a good understanding and analysis of transportation systems. These data are quite rare. In this paper we propose an algorithm for extracting both the real physical topology and the network of traffic flows from timetables of public mass transportation systems. We apply this algorithm to timetables of three large transportation networks. This enables us to make a systematic comparison between three different approaches to construct a graph representation of a transportation network; the resulting graphs are fundamentally different. We also find that the real-life traffic pattern is very heterogeneous, in both space and traffic flow intensities, which makes it very difficult to approximate the node load with a number of topological estimators.
Another class of networks that can be constructed with the help of timetables is that of airport networks @cite_17 @cite_35 @cite_30 @cite_27 . There, the nodes are the airports, and the edges are the flight connections. The weight of an edge reflects the traffic on this connection, which can be approximated by the number of flights that use it during one week. In this case, both the topology and the traffic information are given by timetables. This is because the routes of planes are not constrained to any physical infrastructure, as opposed to roads for cars or rail-tracks for trains. So there are no "real" links and "shortcut" links. In a sense all links are real, and the topologies in the space-of-stops and in the space-of-stations actually coincide.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_27", "@cite_17" ], "mid": [ "2116173498", "2100776472", "2130476447", "2149055390" ], "abstract": [ "We study networks that connect points in geographic space, such as transportation networks and the Internet. We find that there are strong signatures in these networks of topography and use patterns, giving the networks shapes that are quite distinct from one another and from non-geographic networks. We offer an explanation of these differences in terms of the costs and benefits of transportation and communication, and give a simple model based on the Monte Carlo optimization of these costs and benefits that reproduces well the qualitative features of the networks studied.", "The rapid worldwide spread of severe acute respiratory syndrome demonstrated the potential threat an infectious disease poses in a closely interconnected and interdependent world. Here we introduce a probabilistic model that describes the worldwide spread of infectious diseases and demonstrate that a forecast of the geographical spread of epidemics is indeed possible. This model combines a stochastic local infection dynamics among individuals with stochastic transport in a worldwide network, taking into account national and international civil aviation traffic. Our simulations of the severe acute respiratory syndrome outbreak are in surprisingly good agreement with published case reports. We show that the high degree of predictability is caused by the strong heterogeneity of the network. Our model can be used to predict the worldwide spread of future infectious diseases and to identify endangered regions in advance. The performance of different control strategies is analyzed, and our simulations show that a quick and focused reaction is essential to inhibiting the global spread of epidemics.", "We analyze the global structure of the worldwide air transportation network, a critical infrastructure with an enormous impact on local, national, and international economies. We find that the worldwide air transportation network is a scale-free small-world network. In contrast to the prediction of scale-free network models, however, we find that the most connected cities are not necessarily the most central, resulting in anomalous values of the centrality. We demonstrate that these anomalies arise because of the multicommunity structure of the network. We identify the communities in the air transportation network and show that the community structure cannot be explained solely based on geographical constraints and that geopolitical considerations have to be taken into account. We identify each city's global role based on its pattern of intercommunity and intracommunity connections, which enables us to obtain scale-specific representations of the network.", "Networked structures arise in a wide array of different contexts such as technological and transportation infrastructures, social phenomena, and biological systems. These highly interconnected systems have recently been the focus of a great deal of attention that has uncovered and characterized their topological complexity. Along with a complex topological structure, real networks display a large heterogeneity in the capacity and intensity of the connections. These features, however, have mainly not been considered in past studies where links are usually represented as binary states, i.e., either present or absent. 
Here, we study the scientific collaboration network and the world-wide air-transportation network, which are representative examples of social and large infrastructure systems, respectively. In both cases it is possible to assign to each edge of the graph a weight proportional to the intensity or capacity of the connections among the various elements of the network. We define appropriate metrics combining weighted and topological observables that enable us to characterize the complex statistical properties and heterogeneity of the actual strength of edges and vertices. This information allows us to investigate the correlations among weighted quantities and the underlying topological structure of the network. These results provide a better description of the hierarchies and organizational principles at the basis of the architecture of weighted networks." ] }
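To make the airport-network construction described above concrete, here is a toy Python sketch that turns a one-week flight timetable into a weighted edge list (edge weight = number of flights on the connection). The record format, the airport codes, and the choice to treat connections as undirected are illustrative assumptions, not the procedure used in the cited studies.

```python
from collections import Counter

# Hypothetical one-week timetable: one (origin, destination) record per scheduled flight.
timetable = [
    ("WAW", "CDG"), ("WAW", "CDG"), ("CDG", "JFK"),
    ("WAW", "JFK"), ("CDG", "JFK"), ("JFK", "CDG"),
]

# Weight of an edge = number of flights serving that connection during the week;
# here A->B and B->A are merged into one undirected connection.
weights = Counter(frozenset(leg) for leg in timetable)

for edge, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    a, b = sorted(edge)
    print(f"{a} -- {b}: {w} flights/week")
```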
cs0510065
1619974530
This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, an ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, based on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs), is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations.
Most systems that provide anonymity make no provision for tracing the user under any circumstances. Systems such as proxy servers have not been designed to provide accountability. For mobile ad hoc networks, approaches exist that provide unconditional anonymity, again without any accountability @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2139284310" ], "abstract": [ "A mobile ad hoc network consists of mobile nodes that can move freely in an open environment. Communicating nodes in a wireless and mobile ad hoc network usually seek the help of other intermediate nodes to establish communication channels. In such an open environment, malicious intermediate nodes can be a threat to the security and or anonymity of the exchanged data between the mobile nodes. While data encryption can protect the content exchanged between nodes, routing information may reveal valuable information about end users and their relationships. The main purposes of this paper are to study the possibility of achieving anonymity in ad hoc networks, and propose an anonymous routing protocol, similar to onion routing concept used in wired networks. Our protocol includes a mechanism to establish a trust among mobile nodes while avoiding untrustworthy nodes during the route discovery process. The major objective of our protocol is to allow only trustworthy intermediate nodes to participate in the routing protocol without jeopardizing the anonymity of the communicating nodes. We present our scheme, and report on its performance using an extensive set of simulation set of experiments using ns-2 simulator. Our results indicate clearly that anonymity can be achieved in mobile ad hoc networks, and the additional overhead of our scheme to DSR is reasonably low when compared to a non-secure DSR ad hoc routing protocol." ] }
cs0510065
1619974530
This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, an ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, based on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs), is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations.
An area that requires both anonymity and accountability is agent systems ( @cite_19 ). Most of the security architectures for those systems do not provide any anonymity, e.g., @cite_1 , @cite_4 , @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_19", "@cite_1", "@cite_4" ], "mid": [ "1855767844", "118562019", "1482916605", "2071198236" ], "abstract": [ "Mobile agents have recently started being deployed in large-scale distributed systems. However, this new technology brings some security concerns of its own. In this work, we propose a security scheme for protecting mobile agent platforms in large-scale systems. This scheme comprises a mutual authentication protocol for the platforms involved, a mobile agent authenticator, and a method for generation of protection domains. It is based on SPKI SDSI chains of trust, and takes advantage of the flexibility of the SPKI SDSI certificate delegation infrastructure to provide decentralized authorization and authentication control.", "Mobile agent technology offers a new computing paradigm in which a program, in the form of a software agent, can suspend its execution on a host computer, transfer itself to another agent-enabled host on the network, and resume execution on the new host. The use of mobile code has a long history dating back to the use of remote job entry systems in the 1960's. Today's agent incarnations can be characterized in a number of ways ranging from simple distributed objects to highly organized software with embedded intelligence. As the sophistication of mobile software has increased over time, so too have the associated threats to security. This report provides an overview of the range of threats facing the designers of agent platforms and the developers of agentbased applications. The report also identifies generic security objectives, and a range of measures for countering the identified threats and fulfilling these security objectives.", "New portable computers and wireless communication technologies have significantly enhanced mobile computing. The emergence of network technology that supports user mobility and universal network access has prompted new requirements and concerns, especially in the aspects of access control and security. In this paper, we propose a new approach using authorisation agents for cross-domain access control in a mobile computing environment. Our framework consists of three main components, namely centralised authorisation servers, authorisation tokens and authorisation agents. An infrastructure of centralised authorisation servers and application servers from different domains is proposed for supporting trust propagation to mobile hosts instantaneously. While the authorisation token is a form of static capability, the authorisation agent on the client side can be regarded as a dynamic capability to provide the functionality in client-server interactions. It works collaboratively with remote servers to provide authorisation service with finer access granularity and higher flexibility.", "In mobile agent systems, program code together with some process state can autonomously migrate to new hosts. Despite its many practical benefits, mobile agent technology results in significant new security threats from malicious agents and hosts. In this paper, we propose a security architecture to achieve three goals: certification that a server has the authority to execute an agent on behalf of its sender; flexible selection of privileges, so that an agent arriving at a server may be given the privileges necessary to carry out the task for which it has come to the server; and state appraisal, to ensure that an agent has not become malicious as a consequence of alterations to its state. 
The architecture models the trust relations between the principals of mobile agent systems and includes authentication and authorization mechanisms." ] }
cs0510065
1619974530
This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, an ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, based on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs), is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations.
A different scheme that preserves anonymity is proposed in @cite_3 . The scheme is based on a credential system and offers optional anonymity revocation. Its main building blocks are oblivious protocols, circular encryption and the strong RSA assumption.
{ "cite_N": [ "@cite_3" ], "mid": [ "2165210192" ], "abstract": [ "A credential system is a system in which users can obtain credentials from organizations and demonstrate possession of these credentials. Such a system is anonymous when transactions carried out by the same user cannot be linked. An anonymous credential system is of significant practical relevance because it is the best means of providing privacy for users. In this paper we propose a practical anonymous credential system that is based on the strong RSA assumption and the decisional Diffie-Hellman assumption modulo a safe prime product and is considerably superior to existing ones: (1) We give the first practical solution that allows a user to unlinkably demonstrate possession of a credential as many times as necessary without involving the issuing organization. (2) To prevent misuse of anonymity, our scheme is the first to offer optional anonymity revocation for particular transactions. (3) Our scheme offers separability: all organizations can choose their cryptographic keys independently of each other. Moreover, we suggest more effective means of preventing users from sharing their credentials, by introducing all-or-nothing sharing: a user who allows a friend to use one of her credentials once, gives him the ability to use all of her credentials, i.e., taking over her identity. This is implemented by a new primitive, called circular encryption, which is of independent interest, and can be realized from any semantically secure cryptosystem in the random oracle model." ] }
math0509333
2020416768
A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well posed.
For multidimensional scalar ( @math ) conservation laws with arbitrary @math , @cite_5 (generalizing earlier work) shows that a global EEF solution exists, is unique, satisfies the VV condition as well, and is stable under @math perturbations of the initial data.
{ "cite_N": [ "@cite_5" ], "mid": [ "1992812527" ], "abstract": [ "In this paper we construct a theory of generalized solutions in the large of Cauchy's problem for the equations in the class of bounded measurable functions. We define the generalized solution and prove existence, uniqueness and stability theorems for this solution. To prove the existence theorem we apply the \"vanishing viscosity method\"; in this connection, we first study Cauchy's problem for the corresponding parabolic equation, and we derive a priori estimates of the modulus of continuity in of the solution of this problem which do not depend on small viscosity.Bibliography: 22 items." ] }
math0509333
2020416768
A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well posed.
@cite_26 proposes the EEF condition for scalar conservation laws ( @math ), proves that it is implied by the VV condition under some circumstances and notes that there is a large set of convex entropies. Apparently independently, @cite_5 obtained analogous results for systems. @cite_32 contains the first use of the term "entropy condition" for the EEF condition. Various forms of the EEF condition had been known and in use for special systems such as the Euler equations for a long time (e.g. under the name of the Clausius-Duhem inequality), especially as shock relations; however, the above references seem to be the first to define the general notion of strictly convex EEF pairs, to propose the EEF condition as a mathematical tool for arbitrary systems of conservation laws, and to formulate it in the weak form rather than the special case.
{ "cite_N": [ "@cite_5", "@cite_26", "@cite_32" ], "mid": [ "1992812527", "", "205360716" ], "abstract": [ "In this paper we construct a theory of generalized solutions in the large of Cauchy's problem for the equations in the class of bounded measurable functions. We define the generalized solution and prove existence, uniqueness and stability theorems for this solution. To prove the existence theorem we apply the \"vanishing viscosity method\"; in this connection, we first study Cauchy's problem for the corresponding parabolic equation, and we derive a priori estimates of the modulus of continuity in of the solution of this problem which do not depend on small viscosity.Bibliography: 22 items.", "", "Publisher Summary This chapter provides an overview of shock waves and entropy. It describes systems of the first order partial differential equations in conservation form: ∂ t U + ∂ X F = 0, F = F(u). In many cases, all smooth solutions of the first order partial differential equations in conservation form satisfy an additional conservation law where U is a convex function of u. The chapter discusses that for all weak solutions of ∂ t u j +∂ x f j = 0, j=1,…, m, f j =f j (u 1 ,…, u m ), which are limits of solutions of modifications ∂ t u j +∂ x f j = 0, j=1,…, m, f j =f j (u 1 ,…, u m ) , by the introduction of various kinds of dissipation, satisfy the entropy inequality, that is, ∂ t U + ∂ x F≦ 0. The chapter also explains that for weak solutions, which contain discontinuities of moderate strength, ∂ t U + ∂ x F≦ 0 is equivalent to the usual shock condition involving the number of characteristics impinging on the shock. The chapter also describes all possible entropy conditions of ∂ t U + ∂ x F≦ 0 that can be associated to a given hyperbolic system of two conservation laws." ] }
math0509333
2020416768
A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well posed.
(TODO: mention that @cite_28 fig 30 p. 296 is our example if the wedge is replaced by stagnation air; see p. 345 Fig 69. Quote: "All these and other mathematically possible flow patterns with a singular center Z are at our disposal for interpreting experimental evidence. Which, if any, of these possibilities occurs under given circumstances is a question that cannot possibly be decided within the framework of a theory with such a high degree of indeterminacy. Here we have a typical instance of a theory incomplete and oversimplified in its basic assumptions; only by going more deeply into the physical basis of our theory, i.e. by accounting for heat conduction and viscosity, can we hope to clarify completely the phenomena at a three-shock singularity. It may well be that the boundary layer which develops along the contact discontinuity line modifies the flow pattern sufficiently to account for the observed deviation; [...quote Liepmann paper]")
{ "cite_N": [ "@cite_28" ], "mid": [ "1532515344" ], "abstract": [ "Keywords: ecoulement : compressible ; ecoulement : supersonique ; onde de : choc Reference Record created on 2005-11-18, modified on 2016-08-08" ] }
physics0509217
2164680115
We present a model for the diffusion of management fads and other technologies which lack clear objective evidence about their merits. The choices made by non-Bayesian adopters reflect both their own evaluations and the social influence of their peers. We show, both analytically and computationally, that the dynamics lead to outcomes that appear to be deterministic in spite of being governed by a stochastic process. In other words, when the objective evidence about a technology is weak, the evolution of this process quickly settles down to a fraction of adopters that is not predetermined. When the objective evidence is strong, the proportion of adopters is determined by the quality of the evidence and the adopters' competence.
In this paper we propose a model that is consistent with all of Camerer's observations and so is an alternative to canonical herding models. Thus our agents exhibit normatively desirable and empirically plausible monotonicity properties: in particular, the more the social cues favor innovation A over B, the more likely it is that an agent will select A, ceteris paribus. Yet the reasoning that underlies such choices is adaptively rational rather than fully rational. Moreover, unlike many adaptive models of fads, the present model generates analytical solutions, not just computational ones. Many, perhaps most, adaptive models of fads are what has come to be called "agent-based models", and it is virtually a defining feature of such models that they be computational. (For a survey of agent-based models, including several applied to fads, see @cite_12 .)
{ "cite_N": [ "@cite_12" ], "mid": [ "2167951823" ], "abstract": [ "■ Abstract Sociologists often model social processes as interactions among variables. We review an alternative approach that models social life as interactions among adaptive agents who influence one another in response to the influence they receive. These agent-based models (ABMs) show how simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action. Emergent social patterns can also appear unexpectedly and then just as dramatically transform or disappear, as happens in revolutions, market crashes, fads, and feeding frenzies. ABMs provide theoretical leverage where the global patterns of interest are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom up dynamical model of the microfoundations at the relational level. We begin with a brief historical sketch of the shift from “factors” to “actors” in computational sociology that shows how agent-based modeling differs fundamentally from earlier sociological uses of computer simulation. We then review recent contributions focused on the emergence of social structure and social order out of local interaction. Although sociology has lagged behind other social sciences in appreciating this new methodology, a distinctive sociological contribution is evident in the papers we review. First, theoretical interest focuses on dynamic social networks that shape and are shaped by agent interaction. Second, ABMs are used to perform virtual experiments that test macrosociological theories by manipulating structural factors like network topology, social stratification, or spatial mobility. We conclude our review with a series of recommendations for realizing the rich sociological potential of this approach." ] }
cs0509024
2951388382
In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision.
A more elaborate definition of a stable semantics was given in @cite_29 for programs with weight constraints and implemented in the well-known smodels system. In our language, weight constraints correspond to aggregate atoms built with the @math and @math aggregate relations (a small illustrative sketch of this correspondence follows this record). An extensive comparison between the @math -stable semantics and the stable semantics of weight constraints can be found in @cite_8 @cite_15 and will not be repeated here.
{ "cite_N": [ "@cite_29", "@cite_15", "@cite_8" ], "mid": [ "2011124182", "342706626", "" ], "abstract": [ "A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules to a small subclass of the language called basic constraint rules is devised. An implementation of the language, the SMODELS system, is developed based on this embedding. It uses a two level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.", "Aggregates are functions that take sets as arguments. Examples are the function that maps a set to the number of its elements or the function which maps a set to its minimal element. Aggregates are frequently used in relational databases and have many applications in combinatorial search problems and knowledge representation. Aggregates are of particular importance for several extensions of logic programming which are used for declarative programming like Answer Set Programming, Abductive Logic Programming, and the logic of inductive definitions (ID-Logic). Aggregate atoms not only allow a broader class of problems to be represented in a natural way but also allow a more compact representation of problems which often leads to faster solving times. Extensions of specific semantics of logic programs with, in many cases, specific aggregate relations have been proposed before. The main contributions of this thesis are: (i) we extend all major semantics of logic programs: the least model semantics of definite logic programs, the standard model semantics of stratified programs, the Clark completion semantics, the well-founded semantics, the stable models semantics, and the three-valued stable semantics; (ii) our framework admits arbitrary aggregate relations in the bodies of rules. We follow a denotational approach in which a semantics is defined as a (set of) fixpoint(s) of an operator associated with a program. The main tool of this work is Approximation Theory. This is an algebraic theory which defines different types of fixpoints of an approximating operator associated with a logic program. All major semantics of a logic program correspond to specific types of fixpoints of an approximating operator introduced by Fitting. We study different approximating operators for aggregate programs and investigate the precision and complexity of the semantics generated by them. 
We study in detail one specific operator which extends the Fitting operator and whose semantics extends the three-valued stable semantics of logic programs without aggregates. We look at algorithms, complexity, transformations of aggregate atoms and programs, and an implementation in XSB Prolog.", "" ] }
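To make the correspondence above concrete, here is a minimal, illustrative Python sketch (not taken from smodels or any cited system): checking a weight constraint l <= {a1=w1, ..., an=wn} <= u against an interpretation amounts to evaluating a sum aggregate over the weights of the satisfied atoms, and with unit weights it becomes a count aggregate. All atom names, weights, and bounds below are invented for the example.

# Illustrative sketch: a weight constraint  l <= {a1=w1, ..., an=wn} <= u
# holds in an interpretation I exactly when the sum aggregate over the
# weights of the constraint's atoms that are true in I lies in [l, u].

def weight_constraint_holds(lower, upper, weighted_atoms, interpretation):
    """weighted_atoms: dict atom -> weight; interpretation: set of true atoms."""
    total = sum(w for a, w in weighted_atoms.items() if a in interpretation)
    return lower <= total <= upper

# With unit weights the same check is a cardinality (count) aggregate.
def cardinality_constraint_holds(lower, upper, atoms, interpretation):
    return weight_constraint_holds(lower, upper, {a: 1 for a in atoms}, interpretation)

if __name__ == "__main__":
    I = {"p", "q"}                      # hypothetical interpretation
    ws = {"p": 2, "q": 3, "r": 5}       # hypothetical weighted atoms
    print(weight_constraint_holds(1, 5, ws, I))        # True: 2 + 3 = 5 <= 5
    print(cardinality_constraint_holds(3, 3, ws, I))   # False: only 2 atoms true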
cs0509024
2951388382
In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision.
A novel feature of the language of weight constraints was that it allows weight constraints to appear also in the heads of rules. This approach has been further developed in different directions. One line of research considers variations and extensions of weight constraints such as abstract constraints @cite_23 , monotone cardinality atoms @cite_10 , or set constraints @cite_20 . Such constraint atoms correspond in a natural way to aggregate atoms. The stable semantics of these extensions is also defined in terms of lattice operators. However, since constraint atoms are allowed in the heads of rules, the operators become non-deterministic and the algebraic theory is quite different from the approximation theory we used in this work. Nevertheless, all the semantics agree on the class of definite aggregate programs and its least model semantics. The equivalent of a definite logic program in @cite_20 is called a Horn SC-logic program; such programs are also characterized by a unique model, which is the least fixpoint of a deterministic monotone operator @math that is the counterpart of our @math operator (a small illustrative fixpoint computation follows this record).
{ "cite_N": [ "@cite_20", "@cite_10", "@cite_23" ], "mid": [ "1540263588", "1638553575", "1854994931" ], "abstract": [ "We investigate a generalization of weight-constraint programs with stable semantics, as implemented in the ASP solver smodels. Our programs admit atoms of the form ( X, F ) where X is a finite set of propositional atoms and ( F ) is an arbitrary family of subsets of X. We call such atoms set constaints and show that the concept of stable model can be generalized to programs admitting set constraints both in the bodies and the heads of clauses. Natural tools to investigate the fixpoint semantics for such programs are nondeterministic operators in complete lattices. We prove two fixpoint theorems for such operators.", "We investigate mca-programs, that is, logic programs with clauses built of monotone cardinality atoms of the form kX, where k is a non-negative integer and X is a finite set of propositional atoms. We develop a theory of mca-programs. We demonstrate that the operational concept of the one-step provability operator generalizes to mca-programs, but the generalization involves nondeterminism. Our main results show that the formalism of mca-programs is a common generalization of (1) normal logic programming with its semantics of models, supported models and stable models, (2) logic programming with cardinality atoms and with the semantics of stable models, as defined by Niemela, Simons and Soininen, and (3) of disjunctive logic programming with the possible-model semantics of Sakama and Inoue.", "We propose and study extensions of logic programming with constraints represented as generalized atoms of the form C(X), where X is a finite set of atoms and C is an abstract constraint (formally, a collection of sets of atoms). Atoms C(X) are satisfied by an interpretation (set of atoms) M, if M ∩ X ∈ C. We focus here on monotone constraints, that is, those collections C that are closed under the superset. They include, in particular, weight (or pseudo-boolean) constraints studied both by the logic programming and SAT communities. We show that key concepts of the theory of normal logic programs such as the one-step provability operator, the semantics of supported and stable models, as well as several of their properties including complexity results, can be lifted to such case." ] }
cs0509024
2951388382
In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision.
Another proposal for a stable semantics of disjunctive logic programs extended with aggregates was given in @cite_27 . In the sequel we investigate in more detail the relationship between this semantics and the family of @math -stable semantics defined earlier. First, we recall the definitions of the stable semantics of @cite_27 .
{ "cite_N": [ "@cite_27" ], "mid": [ "2105869173" ], "abstract": [ "The addition of aggregates has been one of the most relevant enhancements to the language of answer set programming (ASP). They strengthen the modeling power of ASP, in terms of concise problem representations. While many important problems can be encoded using nonrecursive aggregates, some relevant examples lend themselves for the use of recursive aggregates. Previous semantic definitions typically agree in the nonrecursive case, but the picture is less clear for recursion. Some proposals explicitly avoid recursive aggregates, most others differ, and many of them do not satisfy desirable criteria, such as minimality or coincidence with answer sets in the aggregate-free case." ] }
cs0509065
2952811798
For generalized Reed-Solomon codes, it has been proved GuruswamiVa05 that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code -- a property that practical codes do not usually possess. In this paper, we first presented a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e. the evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying Schmidt and Cafure-Matera estimation of rational points on algebraic varieties, we prove that the received vector @math for Reed-Solomon @math , @math , cannot be a deep hole, whenever @math is a polynomial of degree @math for @math .
The pursuit of efficient decoding algorithms for Reed-Solomon codes has yielded intriguing results. If the radius of a Hamming ball centered at some received word is less than half the minimum distance, there can be at most one codeword in the Hamming ball. Finding this codeword is called unambiguous decoding; it can be solved efficiently, and @cite_6 gives a simple algorithm. (A short worked example of the unique-decoding radius follows this record.)
{ "cite_N": [ "@cite_6" ], "mid": [ "1871606885" ], "abstract": [ "Error correction for polynomial block codes is achieved without prior evaluation of power sum symmetric functions. The received word R (z) is reduced mod G (z), the generator of the code and a function F (z) of error locator polynomial W(z), errata values Y and code dependent functions f(xi) of the error positions xi given by ##EQU1## is decomposed into a rational polynomial function N (z) W (z) for which deg (N (z) )<deg ( W (z) )<number of correctable errors. W (z) is the error locator polynomial, the roots of which are the errata locations X and Y, the correction to the received character is obtained from ##EQU2## evaluated at Xi using non-erased check symbols of R (z). Correction is carried out in a crossbar switch structure which recalls a stored copy of R (z) and corrects bits as specified by (Xi, Yi). Another embodiment interposes a matrix transform to transform the symbols of the received word so as to treat a selected set of symbols as erased checks and to present error location corrections directly to the crossbar. Only when changes occur in the pattern of errata is this error corrector apparatus required to operate and to redetermine the transform executed on incoming data R (z)." ] }
cs0509065
2952811798
For generalized Reed-Solomon codes, it has been proved GuruswamiVa05 that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code -- a property that practical codes do not usually possess. In this paper, we first presented a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e. the evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying Schmidt and Cafure-Matera estimation of rational points on algebraic varieties, we prove that the received vector @math for Reed-Solomon @math , @math , cannot be a deep hole, whenever @math is a polynomial of degree @math for @math .
The question of decodability of Reed-Solomon codes has recently attracted attention, due to discoveries about the relationship between decoding Reed-Solomon codes and some number-theoretic problems. Allowing exponential alphabets, Guruswami and Vardy proved that maximum likelihood decoding is NP-complete; they essentially showed that deciding deep holes is co-NP-complete. When the evaluation set is precisely the whole field or @math , an NP-completeness result is hard to obtain; Cheng and Wan @cite_1 managed to prove that the decoding problem of Reed-Solomon codes at a certain radius is at least as hard as the discrete logarithm problem over finite fields. In this paper, we wish to establish an additional connection between decoding of standard Reed-Solomon codes and a classical number-theoretic problem -- that of determining the number of rational points on an algebraic hypersurface. (A small brute-force illustration of deep holes for a toy Reed-Solomon code follows this record.)
{ "cite_N": [ "@cite_1" ], "mid": [ "2130539706" ], "abstract": [ "For an error-correcting code and a distance bound, the list decoding problem is to compute all the codewords within a given distance to a received message. The bounded distance decoding problem is to find one codeword if there is at least one codeword within the given distance, or to output the empty set if there is not. Obviously the bounded distance decoding problem is not as hard as the list decoding problem. For a Reed-Solomon code [n, k] sup q , a simple counting argument shows that for any integer 0 0. We show that the discrete logarithm problem over F sub qh can be efficiently reduced by a randomized algorithm to the bounded distance decoding problem of the Reed-Solomon code [q, g - h] sub q with radius q - g. These results show that the decoding problems for the Reed-Solomon code are at least as hard as the discrete logarithm problem over finite fields. The main tools to obtain these results are an interesting connection between the problem of list-decoding of Reed-Solomon code and the problem of discrete logarithm over finite fields, and a generalization of Katz's theorem on representations of elements in an extension finite field by products of distinct linear factors." ] }
cs0508009
1688492802
We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope for our study to have a great impact on realistic modeling of network usage and mobility patterns in wireless networks.
With the growing popularity of wireless LANs in recent years, there is increasing interest in studying their usage. Several previous works @cite_8 , @cite_15 , @cite_5 have provided extensive studies of wireless network usage statistics and made their traces available to the research community. Our work builds upon these findings and traces.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_8" ], "mid": [ "1975804880", "2002169759", "2160494326" ], "abstract": [ "Wireless Local Area Networks (WLANs) are now commonplace on many academic and corporate campuses. As \"Wi-Fi\" technology becomes ubiquitous, it is increasingly important to understand trends in the usage of these networks.This paper analyzes an extensive network trace from a mature 802.11 WLAN, including more than 550 access points and 7000 users over seventeen weeks. We employ several measurement techniques, including syslogs, telephone records, SNMP polling and tcpdump packet sniffing. This is the largest WLAN study to date, and the first to look at a large, mature WLAN and consider geographic mobility. We compare this trace to a trace taken after the network's initial deployment two years ago.We found that the applications used on the WLAN changed dramatically. Initial WLAN usage was dominated by Web traffic; our new trace shows significant increases in peer-to-peer, streaming multimedia, and voice over IP (VoIP) traffic. On-campus traffic now exceeds off-campus traffic, a reversal of the situation at the WLAN's initial deployment. Our study indicates that VoIP has been used little on the wireless network thus far, and most VoIP calls are made on the wired network. Most calls last less than a minute.We saw greater heterogeneity in the types of clients used, with more embedded wireless devices such as PDAs and mobile VoIP clients. We define a new metric for mobility, the \"session diameter.\" We use this metric to show that embedded devices have different mobility characteristics than laptops, and travel further and roam to more access points. Overall, users were surprisingly non-mobile, with half remaining close to home about 98 of the time.", "In this paper, we analyze the mobility patterns of users of wireless hand-held PDAs in a campus wireless network using an eleven week trace of wireless network activity. Our study has two goals. First, we characterize the high-level mobility and access patterns of hand-held PDA users and compare these characteristics to previous workload mobility studies focused on laptop users. Second, we develop two wireless network topology models for use in wireless mobility studies: an evolutionary topology model based on user proximity and a campus waypoint model that serves as a trace-based complement to the random waypoint model. We use our evolutionary topology model as a case study for preliminary evaluation of three ad hoc routing algorithms on the network topologies created by the access and mobility patterns of users of modern wireless PDAs. Based upon the mobility characteristics of our trace-based campus waypoint model, we find that commonly parameterized synthetic mobility models have overly aggressive mobility characteristics for scenarios where user movement is limited to walking. Mobility characteristics based on realistic models can have significant implications for evaluating systems designed for mobility. When evaluated using our evolutionary topology model, for example, popular ad hoc routing protocols were very successful at adapting to user mobility, and user mobility was not a key factor in their performance.", "Wireless local-area networks are becoming increasingly popular. They are commonplace on university campuses and inside corporations, and they have started to appear in public areas [17]. 
It is thus becoming increasingly important to understand user mobility patterns and network usage characteristics on wireless networks. Such an understanding would guide the design of applications geared toward mobile environments (e.g., pervasive computing applications), would help improve simulation tools by providing a more representative workload and better user mobility models, and could result in a more effective deployment of wireless network components.Several studies have recently been performed on wire-less university campus networks and public networks. In this paper, we complement previous research by presenting results from a four week trace collected in a large corporate environment. We study user mobility patterns and introduce new metrics to model user mobility. We also analyze user and load distribution across access points. We compare our results with those from previous studies to extract and explain several network usage and mobility characteristics.We find that average user transfer-rates follow a power law. Load is unevenly distributed across access points and is influenced more by which users are present than by the number of users. We model user mobility with persistence and prevalence. Persistence reflects session durations whereas prevalence reflects the frequency with which users visit various locations. We find that the probability distributions of both measures follow power laws." ] }
cs0508009
1688492802
We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope for our study to have a great impact on realistic modeling of network usage and mobility patterns in wireless networks.
With these traces available, more recent research has focused on modeling user behaviors in wireless LANs. In @cite_11 the authors propose models to describe traffic flows generated by wireless LAN users, which is a different focus from ours. In the first part of this paper we focus on identifying metrics that capture important characteristics of user association behavior. We view user associations as coarse-grained mobility at per-access-point granularity. A similar methodology has been used in @cite_8 and @cite_3 . In @cite_3 the authors propose a mobility model based on the association session length distribution and AP preferences. However, other important metrics, such as user on-off behavior and repetitive patterns, are not included. We add these metrics to provide a more complete description of user behavior in wireless networks. (A small illustrative sketch of computing such association metrics from a trace follows this record.)
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_11" ], "mid": [ "2160494326", "2127484926", "2133436197" ], "abstract": [ "Wireless local-area networks are becoming increasingly popular. They are commonplace on university campuses and inside corporations, and they have started to appear in public areas [17]. It is thus becoming increasingly important to understand user mobility patterns and network usage characteristics on wireless networks. Such an understanding would guide the design of applications geared toward mobile environments (e.g., pervasive computing applications), would help improve simulation tools by providing a more representative workload and better user mobility models, and could result in a more effective deployment of wireless network components.Several studies have recently been performed on wire-less university campus networks and public networks. In this paper, we complement previous research by presenting results from a four week trace collected in a large corporate environment. We study user mobility patterns and introduce new metrics to model user mobility. We also analyze user and load distribution across access points. We compare our results with those from previous studies to extract and explain several network usage and mobility characteristics.We find that average user transfer-rates follow a power law. Load is unevenly distributed across access points and is influenced more by which users are present than by the number of users. We model user mobility with persistence and prevalence. Persistence reflects session durations whereas prevalence reflects the frequency with which users visit various locations. We find that the probability distributions of both measures follow power laws.", "The simulation of mobile networks calls for a mobility model to generate the trajectories of the mobile users (or nodes). It has been shown that the mobility model has a major influence on the behavior of the system. Therefore, using a realistic mobility model is important if we want to increase the confidence that simulations of mobile systems are meaningful in realistic settings. In this paper we present an executable mobility model that uses real-life mobility characteristics to generate mobility scenarios that can be used for network simulations. We present a structured framework for extracting the mobility characteristics from a WLAN trace, for processing the mobility characteristics to determine a parameter set for the mobility model, and for using a parameter set to generate mobility scenarios for simulations. To derive the parameters of the mobility' model, we measure the mobility' characteristics of users of a campus wireless network. Therefore, we call this model the WLAN mobility model Mobility-analysis confirms properties observed by other research groups. The validation shows that the WLAN model maps the real-world mobility' characteristics to the abstract world of network simulators with a very small error. For users that do not have the possibility to capture a WLAN trace, we explore the value space of the WLAN model parameters and show how different parameters sets influence the mobility of the simulated nodes.", "Several studies have recently been performed on wireless university campus networks, corporate and public networks. Yet little is known about the flow-level characterization in such networks. In this paper, we statistically characterize both static flows and roaming flows in a large campus wireless network using a recently-collected trace. 
For static flows, we take a two-tier approach to characterizing the flow arrivals, which results a Weibull regression model. We further discover that the static flow arrivals in spatial proximity show strong similarity. As for roaming flows, they can also be well characterized statistically.We explain the results by user behaviors and application demands, and further cross-validate the modeling results by three other traces. Finally, we use two examples to illustrate how to apply our models for performance evaluation in the wireless context." ] }
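As a purely illustrative sketch of the kind of association metrics discussed above (the trace format, field names, observation window, and numbers below are invented, not taken from any of the cited traces), the following Python computes two simple per-user quantities from a list of association sessions: the fraction of the observation window a user is online, and the number of distinct access points visited.

# Illustrative only: sessions are (user, ap, start_time, end_time) tuples in
# hours; the trace below is made up.  We compute, per user, the fraction of
# the observation window spent online and the number of distinct APs visited.

from collections import defaultdict

OBSERVATION_WINDOW = 24.0 * 7          # one week, in hours (assumed)

sessions = [
    ("alice", "ap1", 0.0, 2.0),
    ("alice", "ap2", 30.0, 31.5),
    ("bob",   "ap1", 10.0, 10.2),
]

online_time = defaultdict(float)
aps_visited = defaultdict(set)
for user, ap, start, end in sessions:
    online_time[user] += end - start   # assumes sessions do not overlap
    aps_visited[user].add(ap)

for user in sorted(online_time):
    fraction_online = online_time[user] / OBSERVATION_WINDOW
    print(f"{user}: online fraction {fraction_online:.3f}, "
          f"APs visited {len(aps_visited[user])}")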
cs0508009
1688492802
We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope for our study to have a great impact on realistic modeling of network usage and mobility patterns in wireless networks.
Recent research on protocol design in wireless networks usually relies on synthetic, random mobility models for performance evaluation @cite_9 , such as the random waypoint or random walk model. Mobile nodes (MNs) in such synthetic models are always on and homogeneous in their behavior; neither characteristic is observed in real wireless traces. We argue that, to better serve the purpose of testing new protocols, we need models that capture the on-off and heterogeneous behavior we observed in the traces.
{ "cite_N": [ "@cite_9" ], "mid": [ "2148135143" ], "abstract": [ "In the performance evaluation of a protocol for an ad hoc network, the protocol should be tested under realistic conditions including, but not limited to, a sensible transmission range, limited buffer space for the storage of messages, representative data traffic models, and realistic movements of the mobile users (i.e., a mobility model). This paper is a survey of mobility models that are used in the simulations of ad hoc networks. We describe several mobility models that represent mobile nodes whose movements are independent of each other (i.e., entity mobility models) and several mobility models that represent mobile nodes whose movements are dependent on each other (i.e., group mobility models). The goal of this paper is to present a number of mobility models in order to offer researchers more informed choices when they are deciding upon a mobility model to use in their performance evaluations. Lastly, we present simulation results that illustrate the importance of choosing a mobility model in the simulation of an ad hoc network protocol. Specifically, we illustrate how the performance results of an ad hoc network protocol drastically change as a result of changing the mobility model simulated." ] }
cs0508132
2950600983
We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences.
The work presented in this paper is the natural continuation of the work we presented in @cite_33 , where we rely on prioritized default theories to express limited classes of preferences between trajectories---a strict subset of the preferences covered in this paper. This work is also influenced by other works on exploiting domain-specific knowledge in planning (e.g., @cite_50 @cite_37 @cite_15 ), in which such knowledge is expressed as a constraint on the trajectories achieving the goal, and hence is a hard constraint. In subsection , we discuss different approaches to planning with preferences that are directly related to our work. In Subsections -- we present works that are somewhat related to ours and could be used to develop alternative implementations of @math .
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_50", "@cite_15" ], "mid": [ "1997420616", "2020138862", "2141088850", "2154595228" ], "abstract": [ "Planning for extended goals in non-deterministic domains is one of the most significant and challenging planning problems. In spite of the recent results in this field, no work has proposed a language designed specifically for planning. As a consequence, it is still impossible to specify and plan for several classes of goals that are typical of real world applications, like for instance \"try to achieve a goal whenever possible\", or \"if you fail to achieve a goal, recover by trying to achieve something else\".We propose a new goal language that allows for capturing the intended meaning of these goals. We give a semantics to this language that is radically different from the usual semantics for extended goals, e.g., the semantics for LTL or CTL. Finally, we implement an algorithm for planning for extended goals expressed in this language, and experiment with it on a parametric domain.", "This paper shows how action theories, expressed in an extended version of the language , can be naturally encoded using Prioritized Default Theory. We also show how prioritized default theory can be extended to express preferences between rules. This extension provides a natural framework to introduce different types of preferences in action theories—preferences between actions and preferences between final states. In particular, we demonstrate how these preferences can be expressed within extended prioritized default theory. We also discuss how this framework can be implemented in terms of answer set programming.", "Over the years increasingly sophisticated planning algorithms have been developed. These have made for more efficient planners, but unfortunately these planners still suffer from combinatorial complexity even in simple domains. Theoretical results demonstrate that planning is in the worst case intractable. Nevertheless, planning in particular domains can often be made tractable by utilizing additional domain structure. In fact, it has long been acknowledged that domain independent planners need domain dependent information to help them plan effectively. In this work we present an approach for representing and utilizing domain specific control knowledge. In particular, we show how domain dependent search control knowledge can be represented in a temporal logic, and then utilized to effectively control a forward-chaining planner. There are a number of advantages to our approach, including a declarative semantics for the search control knowledge; a high degree of modularity (new search control knowledge can be added without affecting previous control knowledge); and an independence of this knowledge from the details of the planning algorithm. We have implemented our ideas in the TLPLAN system, and have been able to demonstrate its remarkable effectiveness in a wide range of planning domains.", "In this article we consider three different kinds of domain-dependent control knowledge (temporal, procedural and HTN-based) that are useful in planning. Our approach is declarative and relies on the language of logic programming with answer set semantics (AnsProlog*). AnsProlog* is designed to plan without control knowledge. We show how temporal, procedural and HTN-based control knowledge can be incorporated into AnsProlog* by the modular addition of a small number of domain-dependent rules, without the need to modify the planner. 
We formally prove the correctness of our planner, both in the absence and presence of the control knowledge. Finally, we perform some initial experimentation that demonstrates the potential reduction in planning time that can be achieved when procedural domain knowledge is used to solve planning problems with large plan length." ] }
cs0508132
2950600983
We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences.
The work in @cite_3 introduced a framework for planning with action costs using logic programming. The focus of that proposal is to express certain classes of quantitative preferences: each action is assigned an integer cost, and plans with minimal total cost are considered optimal. Costs can be either static or relative to the time step in which the action is executed (a small illustrative cost computation follows this record). @cite_3 also presents the encoding of different preferences, such as the shortest plan and the cheapest plan. Our approach also emphasizes the use of logic programming, but differs in several aspects. Here, we develop a declarative language for preference representation. Our language can express the preferences discussed in @cite_3 , but it is more high-level and flexible than the action-costs approach. The approach in @cite_3 also does not allow the use of fully general dynamic preferences. On the other hand, while we only consider planning with complete information, @cite_3 deals with planning in the presence of incomplete information and non-deterministic actions.
{ "cite_N": [ "@cite_3" ], "mid": [ "1633032608" ], "abstract": [ "Recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language κc, which extends the declarative planning language κ by action costs. κc provides the notion of admissible and optimal plans, which are plans whose overall action costs are within a given limit resp. minimum over all plans (i.e., cheapest plans). As we demonstrate, this novel language allows for expressing some nontrivial planning tasks in a declarative way. Furthermore, it can be utilized for representing planning problems under other optimality criteria, such as computing \"shortest\" plans (with the least number of steps), and refinement combinations of cheapest and fastest plans. We study complexity aspects of the language κc and provide a transformation to logic programs, such that planning problems are solved via answer set programming. Furthermore, we report experimental results on selected problems. Our experience is encouraging that answer set planning may be a valuable approach to expressive planning systems in which intricate planning problems can be naturally specified and solved." ] }
cs0508132
2950600983
We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences.
Considerable effort has been invested in introducing preferences in logic programming. In @cite_27 preferences are expressed at the level of atoms and used for parsing disambiguation in logic grammars. Rule-level preferences have been used in various proposals for the selection of preferred answer sets in answer set programming @cite_23 @cite_18 @cite_36 @cite_28 . Some of the existing answer set solvers include limited forms of (numerical) optimization capabilities. smodels @cite_26 offers the ability to associate weights to atoms and to compute answer sets that minimize or maximize the total weight. DLV @cite_47 provides the notion of weak constraints, i.e., constraints of the form :~ b_1, ..., b_n. [w : l] where @math is a numeric penalty for violating the constraint, and @math is a priority level. The total cost of violating constraints at each priority level is computed, and answer sets are compared to minimize the total penalty (according to a lexicographic ordering based on priority levels). (A small illustrative comparison of answer sets under such weighted, prioritized penalties follows this record.)
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_36", "@cite_28", "@cite_27", "@cite_23", "@cite_47" ], "mid": [ "2174235632", "2011124182", "", "1563821244", "1986318362", "2124627636", "2106614716" ], "abstract": [ "We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.", "A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules to a small subclass of the language called basic constraint rules is devised. An implementation of the language, the SMODELS system, is developed based on this embedding. It uses a two level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.", "", "We are interested in semantical underpinnings for existing approaches to preference handling in extended logic programming (within the framework of answer set programming). As a starting point, we explore three different approaches that have been recently proposed in the literature. 
Because these approaches use rather different formal means, we furnish a series of uniform characterizations that allow us to gain insights into the relationships among these approaches. To be more precise, we provide different characterizations in terms of (i) fixpoints, (ii) order preservation, and (iii) translations into standard logic programs. While the two former provide semantics for logic programming with preference information, the latter furnishes implementation techniques for these approaches.", "The addition of preferences to normal logic programs is a convenient way to represent many aspects of default reasoning. If the derivation of an atom A1 is preferred to that of an atom A2, a preference rule can be defined so that A2 is derived only if A1 is not. Although such situations can be modelled directly using default negation, it is often easier to define preference rules than it is to add negation to the bodies of rules. As first noted by [Proc. Internat. Conf. on Logic Programming, 1995, pp. 731-746], for certain grammars, it may be easier to disambiguate parses using preferences than by enforcing disambiguation in the grammar rules themselves. In this paper we define a general fixed-point semantics for preference logic programs based on an embedding into the well-founded semantics, and discuss its features and relation to previous preference logic semantics. We then study how preference logic grammars are used in data standardization, the commercially important process of extracting useful information from poorly structured textual data. This process includes correcting misspellings and truncations that occur in data, extraction of relevant information via parsing, and correcting inconsistencies in the extracted information. The declarativity of Prolog offers natural advantages for data standardization, and a commercial standardizer has been implemented using Prolog. However, we show that the use of preference logic grammars allow construction of a much more powerful and declarative commercial standardizer, and discuss in detail how the use of the non-monotonic construct of preferences leads to improved commercial software.", "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. 
It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.", "This paper presents an extension of Disjunctive Datalog (DATALOG^{∨,∼}) by integrity constraints. These are of two types: strong, that is, classical integrity constraints and weak, that is, constraints that are satisfied if possible. While strong constraints must be satisfied, weak constraints express desiderata, that is, they may be violated-actually, their semantics tends to minimize the number of violated instances of weak constraints. Weak constraints may be ordered according to their importance to express different priority levels. As a result, the proposed language (call it, DATALOG^{∨,∼,c}) is well-suited to represent common sense reasoning and knowledge-based problems arising in different areas of computer science such as planning, graph theory optimizations, and abductive reasoning. The formal definition of the language is first given. The declarative semantics of DATALOG^{∨,∼,c} is defined in a general way that allows us to put constraints on top of any existing (model-theoretic) semantics for DATALOG^{∨,∼} programs. Knowledge representation issues are then addressed and the complexity of reasoning on DATALOG^{∨,∼,c} programs is carefully determined. An in-depth discussion on complexity and expressiveness of DATALOG^{∨,∼,c} is finally reported. The discussion contrasts DATALOG^{∨,∼,c} to DATALOG^{∨,∼} and highlights the significant increase in knowledge modeling ability carried out by constraints." ] }
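The prioritized penalty scheme sketched above can be illustrated with a small Python example (an illustration of the general idea, not DLV's implementation). Each violated constraint contributes its weight to the penalty of its priority level; candidate answer sets are then compared by their per-level penalties, higher levels first, i.e., lexicographically. All constraint data below is invented.

# Illustrative sketch: compare candidate answer sets by summing the weights
# of their violated constraints per priority level and ordering the
# resulting penalty vectors lexicographically (higher levels first).

def penalty_vector(violated, max_level):
    """violated: list of (weight, level) pairs for the violated constraints."""
    totals = [0] * (max_level + 1)
    for weight, level in violated:
        totals[level] += weight
    # Higher priority levels matter more, so they come first in the vector.
    return tuple(totals[level] for level in range(max_level, -1, -1))

MAX_LEVEL = 2
candidates = {
    # answer set name -> violated constraints as (weight, level) pairs
    "A1": [(1, 2), (5, 1)],        # penalties: level 2 -> 1, level 1 -> 5
    "A2": [(2, 1), (2, 1)],        # penalties: level 2 -> 0, level 1 -> 4
}

best = min(candidates, key=lambda name: penalty_vector(candidates[name], MAX_LEVEL))
for name, v in candidates.items():
    print(name, penalty_vector(v, MAX_LEVEL))
print("preferred:", best)          # A2: it violates nothing at the top level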
math0506336
2952239370
The rearrangement inequalities of Hardy-Littlewood and Riesz say that certain integrals involving products of two or three functions increase under symmetric decreasing rearrangement. It is known that these inequalities extend to integrands of the form F(u_1,..., u_m) where F is supermodular; in particular, they hold when F has nonnegative mixed second derivatives. This paper concerns the regularity assumptions on F and the equality cases. It is shown here that extended Hardy-Littlewood and Riesz inequalities are valid for supermodular integrands that are just Borel measurable. Under some nondegeneracy conditions, all equality cases are equivalent to radially decreasing functions under transformations that leave the functionals invariant (i.e., measure-preserving maps for the Hardy-Littlewood inequality, translations for the Riesz inequality). The proofs rely on monotone changes of variables in the spirit of Sklar's theorem.
More than thirty years later, Crowe-Zweibel-Rosenbloom proved Eq. ) for @math on @math @cite_36 . They expressed a given continuous supermodular function @math on @math that vanishes on the boundary as the distribution function of a Borel measure @math , the layer-cake representation @math . (A short worked form of the corresponding rectangle identity follows this record.)
{ "cite_N": [ "@cite_36" ], "mid": [ "2053053770" ], "abstract": [ "Let ƒ, g be measurable non-negative functions on R, and let , ḡ be their equimeasurable symmetric decreasing rearrangements. Let F: R × R → R be continuous and suppose that the associated rectangle function defined by F(R) = F(a, c) + F(b, d) − F(a, d) − F(b, c) for R = [(x, y) ϵ R2¦ a ⩽ x ⩽ b, c ⩽ y ⩽ d], is non-negative. Then ∝ F(ƒ, g) dμ ⩽ ∝ F( , g) dμ, where μ is Lebesgue measure. The concept of equimeasurable rearrangement is also defined for functions on a more general class of measure spaces, and the inequality holds in the general case. If F(x, y) = −ϑ(x − y), where ϑ is convex and ϑ(0) = 0, then we obtain ∝ ϑ( − g) dμ ⩽ ∝ ϑ(ƒ − g) dμ. In particular, if ϑ(x) = ¦x¦p, 1 ⩽ p ⩽ +∞, then we find that the operator S:ƒ → is a contraction on Lp for 1 ⩽ p ⩽ +∞." ] }
math0506336
2952239370
The rearrangement inequalities of Hardy-Littlewood and Riesz say that certain integrals involving products of two or three functions increase under symmetric decreasing rearrangement. It is known that these inequalities extend to integrands of the form F(u_1,..., u_m) where F is supermodular; in particular, they hold when F has nonnegative mixed second derivatives. This paper concerns the regularity assumptions on F and the equality cases. It is shown here that extended Hardy-Littlewood and Riesz inequalities are valid for supermodular integrands that are just Borel measurable. Under some nondegeneracy conditions, all equality cases are equivalent to radially decreasing functions under transformations that leave the functionals invariant (i.e., measure-preserving maps for the Hardy-Littlewood inequality, translations for the Riesz inequality). The proofs rely on monotone changes of variables in the spirit of Sklar's theorem.
Carlier viewed maximizing the left hand side of Eq. ) for a given right hand side as an optimal transportation problem where the distribution functions of @math define mass distributions @math on @math , the joint distribution defines a transportation plan, and the functional represents the cost after multiplying by a minus sign @cite_35 . He showed that the functional achieves its maximum (i.e., the cost is minimized) when the joint distribution is concentrated on a curve in @math that is nondecreasing in all coordinate directions, and obtained Eq. ) as a corollary. His proof takes advantage of the dual problem of minimizing @math over @math , subject to the constraint that @math for all @math .
{ "cite_N": [ "@cite_35" ], "mid": [ "2183160839" ], "abstract": [ "We prove existence, uniqueness, duality results and give a characterization of optimal measure-preserving maps for a class of optimal transportation problems with several marginals with compact support in R under the requirement that the cost function satisfles a so-called monotonicity of order 2 condition. Explicit formulas for the minimizers are given and links with some rearrangement inequalities are sketched." ] }
math0506336
2952239370
The rearrangement inequalities of Hardy-Littlewood and Riesz say that certain integrals involving products of two or three functions increase under symmetric decreasing rearrangement. It is known that these inequalities extend to integrands of the form F(u_1,..., u_m) where F is supermodular; in particular, they hold when F has nonnegative mixed second derivatives. This paper concerns the regularity assumptions on F and the equality cases. It is shown here that extended Hardy-Littlewood and Riesz inequalities are valid for supermodular integrands that are just Borel measurable. Under some nondegeneracy conditions, all equality cases are equivalent to radially decreasing functions under transformations that leave the functionals invariant (i.e., measure-preserving maps for the Hardy-Littlewood inequality, translations for the Riesz inequality). The proofs rely on monotone changes of variables in the spirit of Sklar's theorem.
The Riesz inequality in Eq. ) is non-trivial even when @math is just a product of two functions. Ahlfors introduced two-point rearrangements to treat this case on @math @cite_22 , Baernstein-Taylor proved the corresponding result on @math @cite_19 , and Beckner noted that the proof remains valid on @math and @math @cite_9 . When @math is a product of @math functions, Eq. ) has applications to spectral invariants of heat kernels via the Trotter product formula @cite_15 . This case was settled by Friedberg-Luttinger @cite_13 , Burchard-Schmuckenschläger @cite_25 , and by Morpurgo, who proved Eq. ) more generally for integrands defined in terms of a convex function @math (Theorem 3.13 of @cite_8 ). In the above situations, equality cases have been determined @cite_26 @cite_21 @cite_25 @cite_8 . Almgren-Lieb used the technique of Crowe-Zweibel-Rosenbloom to prove Eq. ) for @math @cite_5 . The special case where @math for some convex function @math was identified by Baernstein as a ``master inequality'' from which many classical geometric inequalities can be derived quickly @cite_32 . Eq. ) for continuous supermodular integrands with @math is due to Draghici @cite_0 . (A small numerical illustration of the simpler two-function Hardy-Littlewood inequality follows this record.)
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_9", "@cite_21", "@cite_32", "@cite_0", "@cite_19", "@cite_5", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "1659004305", "1515452785", "2085996269", "", "1999441752", "", "1988493560", "2157142525", "2066606909", "", "2016665346", "2162337698" ], "abstract": [ "The equation dealt with in this paper is in three dimensions. It comes from minimizing the functional which, in turn, comes from an approximation to the Hartree-Fock theory of a plasma. It describes an electron trapped in its own hole. The interesting mathematical aspect of the problem is that & is not convex, and usual methods to show existence and uniqueness of the minimum do not apply. By using symmetric decreasing rearrangement inequalities we are able to prove existence and uniqueness (modulo translations) of a minimizing Φ. To prove uniqueness a strict form of the inequality, which we believe is new, is employed.", "", "", "", "where de denotes normalized surface measure, V is the conformal gradient and q = (2n) (n 2). A modern folklore theorem is that by taking the infinitedimensional limit of this inequality, one obtains the Gross logarithmic Sobolev inequality for Gaussian measure, which determines Nelson's hypercontractive estimates for the Hermite semigroup (see [8]). One observes using conformal invariance that the above inequality is equivalent to the sharp Sobolev inequality on Rn for which boundedness and extremal functions can be easily calculated using dilation invariance and geometric symmetrization. The roots here go back to Hardy and Littlewood. The advantage of casting the problem on the sphere is that the role of the constants is evident, and one is led immediately to the conjecture that this inequality should hold whenever possible (for example, 2 < q < 0o if n = 2). This is in fact true and will be demonstrated in Section 2. A clear question at this point is \"What is the situation in dimension 2?\" Two important arguments ([25], [26], [27]) dealt with this issue, both motivated by geometric variational problems. Because q goes to infinity for dimension 2, the appropriate function space is the exponential class. Responding in part", "", "We prove rearrangement inequalities for multiple integrals, using the polarization technique. Polarization refers to rearranging a function with respect to a hyperplane. Then we derive sharp inequalities for ratios of integrals of heat kernels of Schrodinger operators, using our polarization inequalities. These ratio inequalities imply inequalities for the partition functions and extend the results of R. Banuelos, P.J. Mendez-Hernandez and D. You.", "", "Method and optical driver aid for following a prescribed simulated motor vehicle driving schedule, for example, such as used in automobile exhaust emission control tests, wherein a moving driving cycle trace is projected onto a viewing screen and followed by an indicator means which is responsive to the motor vehicle speed and operation.", "", "", "We study bounds on the exit time of Brownian motion from a set in terms of its size and shape, and the relation of such bounds with isoperimetric inequalities. The first result is an upper bound for the distribution function of the exit time from a subset of a sphere or hyperbolic space of constant curvature in terms of the exit time from a disc of the same volume. This amounts to a rearrangement inequality for the Dirichlet heat kernel. 
To connect this inequality with the classical isoperimetric inequality, we derive a formula for the perimeter of a set in terms of the heat flow over the boundary. An auxiliary result generalizes Riesz' rearrangement inequality to multiple integrals." ] }
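Editorial note: the two records above both concern rearrangement inequalities for supermodular integrands. As a reading aid, the LaTeX sketch below spells out the supermodularity condition and the schematic shape of the extended Hardy-Littlewood inequality they refer to. It is an illustration only; the precise hypotheses (nonnegativity, vanishing at infinity, normalization of F, regularity) differ across the cited papers.

```latex
% Sketch only; exact hypotheses vary across the papers cited above.
% Supermodularity of F : R^m -> R, with componentwise max (\vee) and min (\wedge):
\[
  F(x \vee y) + F(x \wedge y) \;\ge\; F(x) + F(y)
  \qquad \text{for all } x, y \in \mathbb{R}^m,
\]
% which for twice-differentiable F amounts to nonnegative mixed second derivatives:
\[
  \frac{\partial^2 F}{\partial x_i \,\partial x_j} \;\ge\; 0
  \qquad (i \ne j).
\]
% Schematic extended Hardy-Littlewood inequality, where u_i^* denotes the
% symmetric decreasing rearrangement of u_i:
\[
  \int_{\mathbb{R}^n} F\bigl(u_1(x), \dots, u_m(x)\bigr)\,dx
  \;\le\;
  \int_{\mathbb{R}^n} F\bigl(u_1^*(x), \dots, u_m^*(x)\bigr)\,dx .
\]
```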
cs0506002
1589766768
We study a collection of heterogeneous XML databases maintaining similar and related information, exchanging data via a peer-to-peer overlay network. In this setting, a mediated global schema is unrealistic. Yet, users/applications wish to query the databases via one peer using its schema. We have recently developed HepToX, a P2P Heterogeneous XML database system. A key idea is that whenever a peer enters the system, it establishes an acquaintance with a small number of peer databases, possibly with different schema. The peer administrator provides correspondences between the local schema and the acquaintance schema using an informal and intuitive notation of arrows and boxes. We develop a novel algorithm that infers a set of precise mapping rules between the schemas from these visual annotations. We pin down a semantics of query translation given such mapping rules, and present a novel query translation algorithm for a simple but expressive fragment of XQuery that employs the mapping rules in either direction. We show the translation algorithm is correct. Finally, we demonstrate the utility and scalability of our ideas and algorithms with a detailed set of experiments on top of Emulab, a large-scale P2P network emulation testbed.
Schema-matching systems. Automated techniques for schema matching (e.g., CUPID @cite_1 , @cite_17 @cite_13 ) are able to output elementary schema-level associations by exploiting linguistic features, context-dependent type matching, similarity functions, etc. These associations could constitute the input of our rule inference algorithm if the user does not provide the arrows.
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_17" ], "mid": [ "", "2139135093", "2008896880" ], "abstract": [ "", "Schema matching is a critical step in many applications, such as XML message mapping, data warehouse loading, and schema integration. In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems.", "Schema matching is a basic problem in many database application domains, such as data integration, E-business, data warehousing, and semantic query processing. In current implementations, schema matching is typically performed manually, which has significant limitations. On the other hand, previous research papers have proposed many techniques to achieve a partial automation of the match operation for specific application domains. We present a taxonomy that covers many of these existing approaches, and we describe the approaches in some detail. In particular, we distinguish between schema-level and instance-level, element-level and structure-level, and language-based and constraint-based matchers. Based on our classification we review some previous match implementations thereby indicating which part of the solution space they cover. We intend our taxonomy and review of past work to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and when implementing a schema matching component." ] }
cs0506002
1589766768
We study a collection of heterogeneous XML databases maintaining similar and related information, exchanging data via a peer-to-peer overlay network. In this setting, a mediated global schema is unrealistic. Yet, users/applications wish to query the databases via one peer using its schema. We have recently developed HepToX, a P2P Heterogeneous XML database system. A key idea is that whenever a peer enters the system, it establishes an acquaintance with a small number of peer databases, possibly with different schema. The peer administrator provides correspondences between the local schema and the acquaintance schema using an informal and intuitive notation of arrows and boxes. We develop a novel algorithm that infers a set of precise mapping rules between the schemas from these visual annotations. We pin down a semantics of query translation given such mapping rules, and present a novel query translation algorithm for a simple but expressive fragment of XQuery that employs the mapping rules in either direction. We show the translation algorithm is correct. Finally, we demonstrate the utility and scalability of our ideas and algorithms with a detailed set of experiments on top of Emulab, a large-scale P2P network emulation testbed.
P2P systems with non-conventional lookups. Popular P2P networks, e.g., Kazaa and Gnutella, advertise simple lookup queries on file names. The idea of building a full-fledged P2P DBMS is being considered in many works. Internet-scale database queries and functionalities @cite_19 as well as approximate range queries in P2P @cite_18 and XPath queries in small communities of peers @cite_21 have been extensively dealt with. None of these works deals with reconciling schema heterogeneity. @cite_21 relies on a DHT-based network to address simple XPath queries, while @cite_30 realizes IR-style queries in an efficient P2P relational database.
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_18", "@cite_21" ], "mid": [ "2099849300", "1558940048", "2145290781", "2120176307" ], "abstract": [ "We present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers' sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e., direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.", "In this paper, we address the problem of designing a scalable, accurate query processor for peer-to-peer filesharing and similar distributed keyword search systems. Using a globally-distributed monitoring infrastructure, we perform an extensive study of the Gnutella filesharing network, characterizing its topology, data and query workloads. We observe that Gnutella's query processing approach performs well for popular content, but quite poorly for rare items with few replicas. We then consider an alternate approach based on Distributed Hash Tables (DHTs). We describe our implementation of PIERSearch, a DHT-based system, and propose a hybrid system where Gnutella is used to locate popular items, and PIERSearch for handling rare items. We develop an analytical model of the two approaches, and use it in concert with our Gnutella traces to study the trade-off between query recall and system overhead of the hybrid system. We evaluate a variety of localized schemes for identifying items that are rare and worth handling via the DHT. Lastly, we show in a live deployment on fifty nodes on two continents that it nicely complements Gnutella in its ability to handle rare items.", "We present an architecture for a data sharing peer-to-peer system where the data is shared in the form of database relations. In general, peer-to-peer systems try to locate exactmatch data objects to simple user queries. Since peer-to-peer users generally tend to submit broad queries in order to find data of their interest, we develop a P2P data sharing architecture for computing approximate answers for the complex queries by finding data ranges that are similar to the user query. Thus this paper represents the first step towards solving the general range lookup problem over P2P systems instead of exact lookup operations.", "Querying large numbers of data sources is gaining importance due to increasing numbers of independent data providers. One of the key challenges is executing queries on all relevant information sources in a scalable fashion and retrieving fresh results. The key to scalability is to send queries only to the relevant servers and avoid wasting resources on data sources which will not provide any results. Thus, a catalog service, which would determine the relevant data sources given a query, is an essential component in efficiently processing queries in a distributed environment. 
This paper proposes a catalog framework which is distributed across the data sources themselves and does not require any central infrastructure. As new data sources become available, they automatically become part of the catalog service infrastructure, which allows scalability to large numbers of nodes. Furthermore, we propose techniques for workload adaptability. Using simulation and real-world data we show that our approach is valid and can scale to thousands of data sources." ] }
cs0506095
2950169286
Recursive loops in a logic program present a challenging problem to the PLP framework. On the one hand, they loop forever so that the PLP backward-chaining inferences would never stop. On the other hand, they generate cyclic influences, which are disallowed in Bayesian networks. Therefore, in existing PLP approaches logic programs with recursive loops are considered to be problematic and thus are excluded. In this paper, we propose an approach that makes use of recursive loops to build a stationary dynamic Bayesian network. Our work stems from an observation that recursive loops in a logic program imply a time sequence and thus can be used to model a stationary dynamic Bayesian network without using explicit time parameters. We introduce a Bayesian knowledge base with logic clauses of the form @math , which naturally represents the knowledge that the @math s have direct influences on @math in the context @math under the type constraints @math . We then use the well-founded model of a logic program to define the direct influence relation and apply SLG-resolution to compute the space of random variables together with their parental connections. We introduce a novel notion of influence clauses, based on which a declarative semantics for a Bayesian knowledge base is established and algorithms for building a two-slice dynamic Bayesian network from a logic program are developed.
Third, and most importantly, PKB has no mechanism for handling cyclic influences. In PKB, cyclic influences are defined to be inconsistent (see Definition 9 of the paper @cite_10 ) and thus are excluded (PKB rules them out by requiring its programs to be acyclic). In BKB, however, cyclic influences are interpreted as feedbacks, thus implying a time sequence. This allows us to derive a stationary DBN from a logic program with recursive loops.
{ "cite_N": [ "@cite_10" ], "mid": [ "2000805332" ], "abstract": [ "We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure that takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q¦E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete. We use concepts from logic programming to justify our approach." ] }
cs0505011
1644495374
As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display it is not displayed on the map. Instead users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users that were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use.
Moving application windows among various displays has been the focus of research in multiple ubiquitous computing environments. In i-Land, a room with an interactive electronic wall (DynaWall), computer-enhanced chairs, and an interactive table, three methods were introduced for moving application windows on the DynaWall. @cite_6 @cite_3 Two of these methods, shuffling and throwing, are implemented using gestures. Shuffling is done by drawing a quick left or right stroke above the title bar of a window. This will move the window a distance equal to the width of the window in the gestured direction. Throwing is done by making a short gesture backward, then a longer gesture forward. This will move the window a distance proportional to the ratio between the backward and forward movement. The throwing action requires practice because there is no clear indication of how far something will move prior to using it. The final method for moving windows in i-Land is taking. If a user's hand is placed on a window for approximately half a second, that window shrinks into the size of an icon. The next time the user touches any display, the window will grow behind the hand back to its original size.
{ "cite_N": [ "@cite_3", "@cite_6" ], "mid": [ "1530394184", "2094982166" ], "abstract": [ "Publisher Summary This chapter presents an overview of usability inspection methods. Usability inspection is the generic name for a set of methods based on having evaluators inspect or examine usability-related aspects of a user interface. Usability inspectors can be usability specialists, but they can also be software development consultants with special expertise, end users with content or task knowledge, or other types of professionals. The different inspection methods have slightly different goals, but normally, usability inspection is intended as a way of evaluating user interface designs. In usability inspection, the evaluation of the user interface is based on the considered judgment of the inspector(s). The individual inspection methods vary as to how this judgment is derived and on what evaluative criteria inspectors are expected to base their judgments. Typically, a usability inspection is aimed at finding usability problems in an existing user interface design, and then using these problems to make recommendations for fixing the problems and improving the usability of the design. This means that usability inspections are normally used at the stage in the usability engineering cycle when a user interface design has been generated and its usability for users needs to be evaluated.", "We describe the i-LAND environment which constitutes an exampleof our vision of the workspaces of the future, in this casesupporting cooperative work of dynamic teams with changing needs.i-LAND requires and provides new forms of human-computerinteraction and new forms of computer-supported cooperative work.Its design is based on an integration of information andarchitectural spaces, implications of new work practices and anempirical requirements study informing our design. i-LAND consistsof several roomware components, i.e. computer-aug- mented objectsintegrating room elements with information technology. We presentthe current realization of i-LAND in terms of an interactiveelectronic wall, an interactive table, two computer-enhancedchairs, and two bridges for the Passage-mechanism. This iscomplemented by the description of the creativity supportapplication and the technological infrastructure. The paper isaccompanied by a video figure in the CHI99 video program." ] }
cs0505011
1644495374
As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display it is not displayed on the map. Instead users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users that were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use.
In Stanford's iRoom, the PointRight system allows users to use a single mouse and keyboard to control multiple displays. @cite_0 Changing displays is accomplished by simply moving the cursor off the edge of a screen. Currently, iRoom does not move applications across displays, but this mouse technique could be extended to dragging application windows as well.
{ "cite_N": [ "@cite_0" ], "mid": [ "2006563349" ], "abstract": [ "We describe the design of and experience with PointRight, a peer-to-peer pointer and keyboard redirection system that operates in multi-machine, multi-user environments. PointRight employs a geometric model for redirecting input across screens driven by multiple independent machines and operating systems. It was created for interactive workspaces that include large, shared displays and individual laptops, but is a general tool that supports many different configurations and modes of use. Although previous systems have provided for re-routing pointer and keyboard control, in this paper we present a more general and flexible system, along with an analysis of the types of re-binding that must be handled by any pointer redirection system This paper describes the system, the ways in which it has been used, and the lessons that have been learned from its use over the last two years." ] }
cs0505011
1644495374
As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display it is not displayed on the map. Instead users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users that were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use.
Another approach for manipulating objects (text, icons and files) on a digital whiteboard is "Pick-and-Drop". @cite_1 Using Pick-and-Drop, the user can move an object by selecting it on a screen with a stylus (a small animation is provided where the object is lifted and a shadow of the object appears), then placing it on another screen by touching the desired screen with the stylus again. The benefits of this approach include a more tangible copy-and-paste buffer and a more direct interaction than using FTP or other file transfer techniques.
{ "cite_N": [ "@cite_1" ], "mid": [ "2108715885" ], "abstract": [ "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment." ] }
cs0504099
1836465448
The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only O(1/n) (instead of Theta(1/sqrt(n log n)) as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to O(1/(K_n sqrt(n log n))), where each cell gets a transmission opportunity at least once every K_n slots, and K_n tends to infinity as n tends to infinity.
Throughout this paper, we refer to the work of Gupta and Kumar on the capacity of random ad hoc networks @cite_4 . In this work, the authors assume a simplified link layer model in which each packet reception is successful if the receiver has an SINR of at least @math . The authors assume that each packet is decoded at every hop along the path from source to destination. No co-operative communication strategy is used, and interference from other simultaneous transmissions is treated simply as noise. For this communication model, the authors propose a routing and scheduling strategy, and show that a per-node throughput of @math can be achieved.
{ "cite_N": [ "@cite_4" ], "mid": [ "2137775453" ], "abstract": [ "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance." ] }
cs0504099
1836465448
The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only O(1/n) (instead of Theta(1/sqrt(n log n)) as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to O(1/(K_n sqrt(n log n))), where each cell gets a transmission opportunity at least once every K_n slots, and K_n tends to infinity as n tends to infinity.
In @cite_9 , the authors discuss the limitations of the work in @cite_4 by taking a network information theoretic approach. They discuss how several co-operative strategies, such as interference cancellation and network coding, could be used to improve the throughput. However, these tools cannot be fully exploited with current technology, which relies on point-to-point coding and treats all forms of interference as noise. The authors also note that determining the network capacity from an information-theoretic viewpoint is a difficult problem, since even the capacity of a three-node relay network is unknown. In Theorem 3.6 of @cite_9 , the authors determine the same bound on the capacity of a random network as obtained in @cite_4 .
{ "cite_N": [ "@cite_9", "@cite_4" ], "mid": [ "2162180430", "2137775453" ], "abstract": [ "How much information can be carried over a wireless network with a multiplicity of nodes, and how should the nodes cooperate to transfer information? To study these questions, we formulate a model of wireless networks that particularly takes into account the distances between nodes, and the resulting attenuation of radio signals, and study a performance measure that weights information by the distance over which it is transported. Consider a network with the following features. I) n nodes located on a plane, with minimum separation distance spl rho sub min >0. II) A simplistic model of signal attenuation e sup - spl gamma spl rho spl rho sup spl delta over a distance spl rho , where spl gamma spl ges 0 is the absorption constant (usually positive, unless over a vacuum), and spl delta >0 is the path loss exponent. III) All receptions subject to additive Gaussian noise of variance spl sigma sup 2 . The performance measure we mainly, but not exclusively, study is the transport capacity C sub T :=sup spl Sigma on sub spl lscr =1 sup m R sub spl lscr spl middot spl rho sub spl lscr , where the supremum is taken over m, and vectors (R sub 1 ,R sub 2 ,...,R sub m ) of feasible rates for m source-destination pairs, and spl rho sub spl lscr is the distance between the spl lscr th source and its destination. It is the supremum distance-weighted sum of rates that the wireless network can deliver. We show that there is a dichotomy between the cases of relatively high and relatively low attenuation. When spl gamma >0 or spl delta >3, the relatively high attenuation case, the transport capacity is bounded by a constant multiple of the sum of the transmit powers of the nodes in the network. However, when spl gamma =0 and spl delta <3 2, the low-attenuation case, we show that there exist networks that can provide unbounded transport capacity for fixed total power, yielding zero energy priced communication. Examples show that nodes can profitably cooperate over large distances using coherence and multiuser estimation when the attenuation is low. These results are established by developing a coding scheme and an achievable rate for Gaussian multiple-relay channels, a result that may be of interest in its own right.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. 
Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance." ] }
cs0504099
1836465448
The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only O(1/n) (instead of Theta(1/sqrt(n log n)) as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to O(1/(K_n sqrt(n log n))), where each cell gets a transmission opportunity at least once every K_n slots, and K_n tends to infinity as n tends to infinity.
However, just as in @cite_4 , all the above-mentioned works assume that over each link a certain non-zero rate can be achieved. They do not take into account the fact that, in reality, such a rate is achieved with a probability of bit error arbitrarily close (but not equal) to zero. Once the coding and modulation scheme is fixed, the function corresponding to the probability of bit error is also fixed.
{ "cite_N": [ "@cite_4" ], "mid": [ "2137775453" ], "abstract": [ "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance." ] }
cs0504045
2952143109
Internet worms have become a widespread threat to system and network operations. In order to fight them more efficiently, it is necessary to analyze newly discovered worms and attack patterns. This paper shows how techniques based on Kolmogorov Complexity can help in the analysis of internet worms and network traffic. Using compression, different species of worms can be clustered by type. This allows us to determine whether an unknown worm binary could in fact be a later version of an existing worm in an extremely simple, automated, manner. This may become a useful tool in the initial analysis of malicious binaries. Furthermore, compression can also be useful to distinguish different types of network traffic and can thus help to detect traffic anomalies: Certain anomalies may be detected by looking at the compressibility of a network session alone. We furthermore show how to use compression to detect malicious network sessions that are very similar to known intrusion attempts. This technique could become a useful tool to detect new variations of an attack and thus help to prevent IDS evasion. We provide two new plugins for Snort which demonstrate both approaches.
Evans and Barnett @cite_10 compare the complexity of legal FTP traffic to the complexity of attacks against FTP servers. To achieve this, they analyzed the headers of legal and illegal FTP traffic: they gathered several hundred bytes of good and bad traffic and compressed it using compress. Our approach differs in that we use the entire packet or even entire TCP sessions. We do so because we believe that, in the real world, it is hard to collect several hundred bytes of bad traffic from a single attack session using headers alone. Attacks exploiting vulnerabilities in a server are often very short and will not cause any other malicious traffic on the same port. This is especially the case in non-interactive protocols such as HTTP, where all interactions consist of a request and reply only.
{ "cite_N": [ "@cite_10" ], "mid": [ "1531318324" ], "abstract": [ "The problem of network security is approached from the point of view of Kolmogorov complexity (see Evans. S, et al, Proc. DARPA Inf. Survivability Conf. & Exposition II, vol 2. p.322-33, 2001). The principle of conservation of complexity is utilized to identify healthy complexity norms objectively and detect attacks via deviation of these norms under TCP IP. Observed complexity changes that fall within expected hounds are indicators of system health, while complexity changes outside the expected bounds for normal protocol and application use are indicators of system fault or attack. Experimental results using FTP normal and attack sessions are presented." ] }
cs0504045
2952143109
Internet worms have become a widespread threat to system and network operations. In order to fight them more efficiently, it is necessary to analyze newly discovered worms and attack patterns. This paper shows how techniques based on Kolmogorov Complexity can help in the analysis of internet worms and network traffic. Using compression, different species of worms can be clustered by type. This allows us to determine whether an unknown worm binary could in fact be a later version of an existing worm in an extremely simple, automated, manner. This may become a useful tool in the initial analysis of malicious binaries. Furthermore, compression can also be useful to distinguish different types of network traffic and can thus help to detect traffic anomalies: Certain anomalies may be detected by looking at the compressibility of a network session alone. We furthermore show how to use compression to detect malicious network sessions that are very similar to known intrusion attempts. This technique could become a useful tool to detect new variations of an attack and thus help to prevent IDS evasion. We provide two new plugins for Snort which demonstrate both approaches.
Kulkarni, Evans and Barnett @cite_7 also try to track down denial-of-service attacks using Kolmogorov complexity. In this work, they estimate the Kolmogorov complexity by computing an estimate of the entropy of the 1's contained in the packet. They then track the complexity over time using the method of a complexity differential: they sample certain packets from a single flow and then compute the complexity differential once. Here, we always use compression and do not aim to detect DDoS attacks.
{ "cite_N": [ "@cite_7" ], "mid": [ "2005406777" ], "abstract": [ "This paper describes an approach to detecting distributed denial of service (DDoS) attacks that is based on fundamentals of Information Theory, specifically Kolmogorov Complexity. A theorem derived using principles of Kolmogorov Complexity states that the joint complexity measure of random strings is lower than the sum of the complexities of the individual strings when the strings exhibit some correlation. Furthermore, the joint complexity measure varies inversely with the amount of correlation. We propose a distributed active network-based algorithm that exploits this property to correlate arbitrary traffic flows in the network to detect possible denial-of-service attacks. One of the strengths of this algorithm is that it does not require special filtering rules and hence it can be used to detect any type of DDoS attack. We implement and investigate the performance of the algorithm in an active network. Our results show that DDoS attacks can be detected in a manner that is not sensitive to legitimate background traffic." ] }
cs0504063
2950442545
In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better new information / all submitted documents ratio. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web.
Menczer @cite_30 describes some disadvantages of current Web search engines on the dynamic Web, e.g., the low ratio of fresh or relevant documents. He proposes to complement the search engines with intelligent crawlers, or web mining agents, to overcome those disadvantages. Search engines take static snapshots of the Web with relatively large time intervals between two snapshots. Intelligent web mining agents are different: they can find the required recent information online and may evolve intelligent behavior by exploiting the Web's linkage and textual information.
{ "cite_N": [ "@cite_30" ], "mid": [ "2001834587" ], "abstract": [ "While search engines have become the major decision support tools for the Internet, there is a growing disparity between the image of the World Wide Web stored in search engine repositories and the actual dynamic, distributed nature of Web data. We propose to attack this problem using an adaptive population of intelligent agents mining the Web online at query time. We discuss the benefits and shortcomings of using dynamic search strategies versus the traditional static methods in which search and retrieval are disjoint. This paper presents a public Web intelligence tool called MySpiders, a threaded multiagent system designed for information discovery. The performance of the system is evaluated by comparing its effectiveness in locating recent, relevant documents with that of search engines. We present results suggesting that augmenting search engines with adaptive populations of intelligent search agents can lead to a significant competitive advantage. We also discuss some of the challenges of evaluating such a system on current Web data, introduce three novel metrics for this purpose, and outline some of the lessons learned in the process." ] }
cs0504063
2950442545
In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better new information / all submitted documents ratio. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web.
Risvik and Michelsen @cite_0 mention that, because of the exponential growth of the Web, there is an ever-increasing need for more intelligent, (topic-)specific algorithms for crawling, like focused crawling and document classification. With these algorithms, crawlers and search engines can operate more efficiently in a topically limited document space. The authors also state that in such vertical regions the dynamics of Web pages are more homogeneous.
{ "cite_N": [ "@cite_0" ], "mid": [ "2046862025" ], "abstract": [ "Abstract In this paper we study several dimensions of Web dynamics in the context of large-scale Internet search engines. Both growth and update dynamics clearly represent big challenges for search engines. We show how the problems arise in all components of a reference search engine model. Furthermore, we use the FAST Search Engine architecture as a case study for showing some possible solutions for Web dynamics and search engines. The focus is to demonstrate solutions that work in practice for real systems. The service is running live at www.alltheweb.com and major portals worldwide with more than 30 million queries a day, about 700 million full-text documents, a crawl base of 1.8 billion documents, updated every 11 days, at a rate of 400 documents second. We discuss future evolution of the Web, and some important issues for search engines will be scheduling and query execution as well as increasingly heterogeneous architectures to handle the dynamic Web." ] }
cs0504063
2950442545
In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better new information / all submitted documents ratio. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web.
Menczer @cite_30 also introduces a recency metric, which is 1 if all of the documents are recent (i.e., not changed after the last download) and goes to 0 as the downloaded documents become more and more obsolete. Trivially, immediately after a few minutes' run of an online crawler the value of this metric will be 1, while the value for the search engine will be lower.
{ "cite_N": [ "@cite_30" ], "mid": [ "2001834587" ], "abstract": [ "While search engines have become the major decision support tools for the Internet, there is a growing disparity between the image of the World Wide Web stored in search engine repositories and the actual dynamic, distributed nature of Web data. We propose to attack this problem using an adaptive population of intelligent agents mining the Web online at query time. We discuss the benefits and shortcomings of using dynamic search strategies versus the traditional static methods in which search and retrieval are disjoint. This paper presents a public Web intelligence tool called MySpiders, a threaded multiagent system designed for information discovery. The performance of the system is evaluated by comparing its effectiveness in locating recent, relevant documents with that of search engines. We present results suggesting that augmenting search engines with adaptive populations of intelligent search agents can lead to a significant competitive advantage. We also discuss some of the challenges of evaluating such a system on current Web data, introduce three novel metrics for this purpose, and outline some of the lessons learned in the process." ] }
cs0504063
2950442545
In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better new information / all submitted documents ratio. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web.
@cite_31 present a mathematical crawler model in which the number of obsolete pages can be minimized with a nonlinear equation system. They solved the nonlinear equations with different parameter settings on realistic model data. Their model uses different buckets for documents having different change rates and therefore does not need any theoretical model of the change rate of pages. The main limitations of this work are the following:
{ "cite_N": [ "@cite_31" ], "mid": [ "2018928332" ], "abstract": [ "This paper outlines the design of a web crawler implemented for IBM Almaden's WebFountain project and describes an optimization model for controlling the crawl strategy. This crawler is scalable and incremental. The model makes no assumptions about the statistical behaviour of web page changes, but rather uses an adaptive approach to maintain data on actual change rates which are in turn used as inputs for the optimization. Computational results with simulated but realistic data show that there is no magic bullet' different, but equally plausible, objectives lead to con icting optimal' strategies. However, we nd that there are compromise objectives which lead to good strategies that are robust against a number of criteria." ] }
cs0504063
2950442545
In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better new information / all submitted documents ratio. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web.
By solving the nonlinear equations, the content of web pages cannot be taken into consideration. The model cannot be extended easily to (topic-)specific crawlers, which would be highly advantageous on the exponentially growing web @cite_9 , @cite_0 , @cite_30 . Rapidly changing documents (like those on news sites) are not considered to be in any bucket; therefore increasingly important parts of the web are excluded from the searches.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_30" ], "mid": [ "2046862025", "", "2001834587" ], "abstract": [ "Abstract In this paper we study several dimensions of Web dynamics in the context of large-scale Internet search engines. Both growth and update dynamics clearly represent big challenges for search engines. We show how the problems arise in all components of a reference search engine model. Furthermore, we use the FAST Search Engine architecture as a case study for showing some possible solutions for Web dynamics and search engines. The focus is to demonstrate solutions that work in practice for real systems. The service is running live at www.alltheweb.com and major portals worldwide with more than 30 million queries a day, about 700 million full-text documents, a crawl base of 1.8 billion documents, updated every 11 days, at a rate of 400 documents second. We discuss future evolution of the Web, and some important issues for search engines will be scheduling and query execution as well as increasingly heterogeneous architectures to handle the dynamic Web.", "", "While search engines have become the major decision support tools for the Internet, there is a growing disparity between the image of the World Wide Web stored in search engine repositories and the actual dynamic, distributed nature of Web data. We propose to attack this problem using an adaptive population of intelligent agents mining the Web online at query time. We discuss the benefits and shortcomings of using dynamic search strategies versus the traditional static methods in which search and retrieval are disjoint. This paper presents a public Web intelligence tool called MySpiders, a threaded multiagent system designed for information discovery. The performance of the system is evaluated by comparing its effectiveness in locating recent, relevant documents with that of search engines. We present results suggesting that augmenting search engines with adaptive populations of intelligent search agents can lead to a significant competitive advantage. We also discuss some of the challenges of evaluating such a system on current Web data, introduce three novel metrics for this purpose, and outline some of the lessons learned in the process." ] }
cs0504101
2493383280
We identify a new class of hard 3-SAT instances, namely random 3-SAT problems having exactly one solution and as few clauses as possible. It is numerically shown that the running time of complete methods as well as of local search algorithms for such problems is larger than for random instances around the phase transition point. We therefore provide instances with an exponential complexity in the so-called "easy" region, below the critical value of m/n. This puts a new light on the connection between the phase transition phenomenon and NP-completeness.
Most of the studies of the random 3-SAT ensemble have been concerned with the computational cost at a constant @math as a function of the ratio @math , where the characteristic phase transition-like curve is observed. This is in a way surprising because, for computational complexity (and also for practical applications), it is the scaling of the running time as the problem size increases which is important, i.e. changing @math at fixed @math . Exponential scaling with @math has been numerically observed near the critical point @cite_14 @cite_5 for random 3-SAT, as well as above it (albeit with a smaller exponent). Recently the scaling with @math has been studied and the transition from polynomial to exponential complexity has been observed below @math @cite_41 , again for random 3-SAT.
{ "cite_N": [ "@cite_41", "@cite_5", "@cite_14" ], "mid": [ "", "2155218845", "1561608403" ], "abstract": [ "", "Abstract Determining whether a propositional theory is satisfiable is a prototypical example of an NP-complete problem. Further, a large number of problems that occur in knowledge-representation, learning, planning, and other areas of AI are essentially satisfiability problems. This paper reports on the most extensive set of experiments to date on the location and nature of the crossover point in satisfiability problems. These experiments generally confirm previous results with two notable exceptions. First, we have found that neither of the functions previously proposed accurately models the location of the crossover point. Second, we have found no evidence of any hard problems in the under-constrained region. In fact the hardest problems found in the under-constrained region were many times easier than the easiest unsatisfiable problems found in the neighborhood of the crossover point. We offer explanations for these apparent contradictions of previous results.", "Determining whether a propositional theory is satisfiable is a prototypical example of an NP-complete problem. Further, a large number of problems that occur in knowledge representation, learning, planning, and other areas of AI are essentially satisfiability problems. This paper reports on a series of experiments to determine the location of the crossover point -- the point at which half the randomly generated propositional theories with a given number of variables and given number of clauses are satisfiable -- and to assess the relationship of the crossover point to the difficulty of determining satisfiability. We have found empirically that, for 3-SAT, the number of clauses at the crossover point is a linear function of the number of variables. This result is of theoretical interest since it is not clear why such a linear relationship should exist, but it is also of practical interest since recent experiments [ 92; 91] indicate that the most computationally difficult problems tend to be found near the crossover point. We have also found that for random 3-SAT problems below the crossover point, the average time complexity of satisfiability problems seems empirically to grow linearly with problem size. At and above the crossover point the complexity seems to grow exponentially, but the rate of growth seems to be greatest near the crossover point." ] }
cs0504101
2493383280
We identify a new class of hard 3-SAT instances, namely random 3-SAT problems having exactly one solution and as few clauses as possible. It is numerically shown that the running time of complete methods as well as of local search algorithms for such problems is larger than for random instances around the phase transition point. We therefore provide instances with an exponential complexity in the so-called "easy" region, below the critical value of m/n. This puts a new light on the connection between the phase transition phenomenon and NP-completeness.
There has been numerical evidence @cite_29 @cite_30 @cite_11 that below @math short instances of 3-SAT, as well as of graph coloring @cite_51 , can be hard. With respect to the formula size, an interesting rigorous result @cite_48 @cite_21 is that an ordered DPLL algorithm needs an exponential time @math to find a resolution proof of an unsatisfiable 3-SAT instance. Note that the coefficient of the exponential growth increases with decreasing @math , i.e. shorter formulas are harder. For our ensemble of single-solution formulas we will find the same result.
{ "cite_N": [ "@cite_30", "@cite_48", "@cite_29", "@cite_21", "@cite_51", "@cite_11" ], "mid": [ "2137154807", "2088616860", "", "2126420408", "2002627038", "" ], "abstract": [ "We present a detailed experimental investigation of the easy-hard-easy phase transition for randomly generated instances of satisfiability problems. Problems in the hard part of the phase transition have been extensively used for benchmarking satisfiability algorithms. This study demonstrates that problem classes and regions of the phase transition previously thought to be easy can sometimes be orders of magnitude more difficult than the worst problems in problem classes and regions of the phase transition considered hard. These difficult problems are either hard unsatisfiable problems or are satisfiable problems which give a hard unsatisfiable subproblem following a wrong split. Whilst these hard unsatisfiable problems may have short proofs, these appear to be difficult to find, and other proofs are long and hard.", "WC study the complexity of proving unsatisfiability for random k-CNP formulas with clause density A = m n where 111 is number of clauses and n is the number of variables. We prove the first nontrivial general upper bound, giving algorithmo that, in particular, for k = 3 produce refutations almost certainly in time 20t”iA). This is polynomial when", "", "For every choice of positive integers c and k such that k ≥ 3 and c 2 - k ≥ 0.7, there is a positive number e such that, with probability tending to 1 as n tends to ∞, a randomly chosen family of cn clauses of size k over n variables is unsatisfiable, but every resolution proof of its unsatisfiability must generate at least (1 + e) n clauses.", "Abstract The distribution of hard graph coloring problems as a function of graph connectivity is shown to have two distinct transition behaviors. The first, previously recognized, is a peak in the median search cost near the connectivity at which half the graphs have solutions. This region contains a high proportion of relatively hard problem instances. However, the hardest instances are in fact concentrated at a second, lower, transition point. Near this point, most problems are quite easy, but there are also a few very hard cases. This region of exceptionally hard problems corresponds to the transition between polynomial and exponential scaling of the average search cost, whose location we also estimate theoretically. These behaviors also appear to arise in other constraint problems. This work also shows the limitations of simple measures of the cost distribution, such as mean or median, for identifying outlying cases.", "" ] }
cs0503011
2949386080
In-degree, PageRank, number of visits and other measures of Web page popularity significantly influence the ranking of search results by modern search engines. The assumption is that popularity is closely correlated with quality, a more elusive concept that is difficult to measure directly. Unfortunately, the correlation between popularity and quality is very weak for newly-created pages that have yet to receive many visits and/or in-links. Worse, since discovery of new content is largely done by querying search engines, and because users usually focus their attention on the top few results, newly-created but high-quality pages are effectively "shut out," and it can take a very long time before they become popular. We propose a simple and elegant solution to this problem: the introduction of a controlled amount of randomness into search result ranking methods. Doing so offers new pages a chance to prove their worth, although clearly using too much randomness will degrade result quality and annul any benefits achieved. Hence there is a tradeoff between exploration to estimate the quality of new pages and exploitation of pages already known to be of high quality. We study this tradeoff both analytically and via simulation, in the context of an economic objective function based on aggregate result quality amortized over time. We show that a modest amount of randomness leads to improved search results.
The exploration/exploitation tradeoff that arises in our context is akin to problems studied in the field of reinforcement learning @cite_20 . However, direct application of reinforcement learning algorithms appears prohibitively expensive at Web scales.
{ "cite_N": [ "@cite_20" ], "mid": [ "2107726111" ], "abstract": [ "This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word \"reinforcement.\" The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning." ] }
cs0503047
2098985161
We consider the capacity problem for wireless networks. Networks are modeled as random unit-disk graphs, and the capacity problem is formulated as one of finding the maximum value of a multicommodity flow. In this paper, we develop a proof technique based on which we are able to obtain a tight characterization of the solution to the linear program associated with the multiflow problem, to within constants independent of network size. We also use this proof method to analyze network capacity for a variety of transmitter receiver architectures, for which we obtain some conclusive results. These results contain as a special case (and strengthen) those of Gupta and Kumar for random networks, for which a new derivation is provided using only elementary counting and discrete probability tools.
This work is primarily motivated by our struggle to understand the results of Gupta and Kumar on the capacity of wireless networks @cite_19 . The main idea behind our approach is simple: the transport capacity problem posed in @cite_19 , in the context of random networks, is essentially a throughput stability problem---the goal is to determine how much data can be injected by each node into the network while keeping the system stable---and this throughput stability problem admits a very simple formulation in terms of flow networks. Note also that because of the mechanism for generating source-destination pairs, all connections have the same average length (one half of one network diameter), and thus we do not need to deal with the bit-meters/sec metric considered in @cite_19 .
{ "cite_N": [ "@cite_19" ], "mid": [ "2137775453" ], "abstract": [ "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance." ] }
cs0503047
2098985161
We consider the capacity problem for wireless networks. Networks are modeled as random unit-disk graphs, and the capacity problem is formulated as one of finding the maximum value of a multicommodity flow. In this paper, we develop a proof technique based on which we are able to obtain a tight characterization of the solution to the linear program associated with the multiflow problem, to within constants independent of network size. We also use this proof method to analyze network capacity for a variety of transmitter receiver architectures, for which we obtain some conclusive results. These results contain as a special case (and strengthen) those of Gupta and Kumar for random networks, for which a new derivation is provided using only elementary counting and discrete probability tools.
As mentioned before, @cite_19 sparked significant interest in these problems. Follow-up results from the same group were reported in @cite_10 @cite_29 . Some information-theoretic bounds for large-area networks were obtained in @cite_30 . When nodes are allowed to move, assuming transmission delays proportional to the mixing time of the network, the total network throughput is @math , and therefore the network can carry a non-vanishing rate per node @cite_2 . Using a linear programming formulation, non-asymptotic versions of the results in @cite_19 are given in @cite_22 ; an extended version of that work can be found in @cite_15 . An alternative method for deriving transport capacity was presented in @cite_1 . The capacity of large Gaussian relay networks was found in @cite_3 . Preliminary versions of our work based on network flows have appeared in @cite_9 @cite_32 ; and network flow techniques have been proposed to study network capacity problems (cf., e.g., @cite_34 , and Cover and Thomas [Ch. 14.10]), and network coding problems @cite_0 . From the network coding literature, of particular relevance to this work is the work on multiple unicast sessions @cite_26 .
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_22", "@cite_29", "@cite_9", "@cite_1", "@cite_32", "@cite_3", "@cite_0", "@cite_19", "@cite_2", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2110225498", "", "2165510626", "", "2131992719", "2108190032", "", "2138203492", "2138928022", "2137775453", "59733848", "1747529848", "2105831729", "2136812100" ], "abstract": [ "We derive an information-theoretic upper bound on the rate per communication pair in a large ad hoc wireless network. We show that under minimal conditions on the attenuation due to the environment and for networks with a constant density of users, this rate tends to zero as the number of users gets large.", "", "We define and study capacity regions for wireless ad hoc networks with an arbitrary number of nodes and topology. These regions describe the set of achievable rate combinations between all source-destination pairs in the network under various transmission strategies, such as variable-rate transmission, single-hop or multihop routing, power control, and successive interference cancellation (SIC). Multihop cellular networks and networks with energy constraints are studied as special cases. With slight modifications, the developed formulation can handle node mobility and time-varying flat-fading channels. Numerical results indicate that multihop routing, the ability for concurrent transmissions, and SIC significantly increase the capacity of ad hoc and multihop cellular networks. On the other hand, gains from power control are significant only when variable-rate transmission is not used. Also, time-varying flat-fading and node mobility actually improve the capacity. Finally, multihop routing greatly improves the performance of energy-constraint networks.", "", "We consider the problem of determining rates of growth for the maximum stable throughput achievable in dense wireless networks. We formulate this problem as one of finding maximum flows on random unit-disk graphs. Equipped with the max-flow min-cut theorem as our basic analysis tool, we obtain rates of growth under three models of communication: (a) omnidirectional transmissions; (b) \"simple\" directional transmissions, in which sending nodes generate a single beam aimed at a particular receiver; and (c) \"complex\" directional transmissions, in which sending nodes generate multiple beams aimed at multiple receivers. Our main finding is that an increase of Θlog2n in maximum stable throughput is all that can be achieved by allowing arbitrarily complex signal processing (in the form of generation of directed beams) at the transmitters and receivers. We conclude therefore that neither directional antennas, nor the ability to communicate simultaneously with multiple nodes, can be expected in practice to effectively circumvent the constriction on capacity in dense networks that results from the geometric layout of nodes in space.", "We address the problem of how throughput in a wireless network scales as the number of users grows. Following the model of Gupta and Kumar, we consider n identical nodes placed in a fixed area. Pairs of transmitters and receivers wish to communicate but are subject to interference from other nodes. Throughput is measured in bit-meters per second. We provide a very elementary deterministic approach that gives achievability results in terms of three key properties of the node locations. As a special case, we obtain spl Omega ( spl radic n) throughput for a general class of network configurations in a fixed area. 
Results for random node locations in a fixed area can also be derived as special cases of the general result by verifying the growth rate of three parameters. For example, as a simple corollary of our result we obtain a stronger (almost sure) version of the spl radic n spl radic (logn) throughput for random node locations in a fixed area obtained by Gupta and Kumar. Results for some other interesting non-independent and identically distributed (i.i.d.) node distributions are also provided.", "", "The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding does not only lead to suboptimal performance-it leads to an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes.", "We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. 
Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.", "", "We introduce a methodology for studying wireless ad hoc networks in a multihop traffic environment. Our approach is to use theoretical upper bounds on network performance for evaluating the effects of various design choices: we focus on power control, the queuing discipline, the choice of routing and media access protocols, and their interactions. Using this framework, we then concentrate on the problem of medium access for wireless multihop networks. We first study CSMA CA, and find that its performance strongly depends on the choice of the accompanying routing protocol. We then introduce two protocols that outperform CSMA CA, both in terms of energy efficiency and achievable throughput. The progressive back off algorithm (PBOA) performs medium access jointly with power control. The progressive rump up algorithm (PRUA) sacrifices energy efficiency in favor of higher throughput. Both protocols slot time, and are integrated with queuing disciplines that are more relaxed than the first in first out (FIFO) rule. They are totally distributed and the overhead they require does not increase with the size and node density of the network.", "We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.", "We study communication networks of arbitrary size and topology and communicating over a general vector discrete memoryless channel (DMC). We propose an information-theoretic constructive scheme for obtaining an achievable rate region in such networks. Many well-known capacity-defining achievable rate regions can be derived as special cases of the proposed scheme. 
A few such examples are the physically degraded and reversely degraded relay channels, the Gaussian multiple-access channel, and the Gaussian broadcast channel. The proposed scheme also leads to inner bounds for the multicast and allcast capacities. Applying the proposed scheme to a specific wireless network of n nodes located in a region of unit area, we show that a transport capacity of spl Theta (n) bit-meters per second (bit-meters s) is feasible in a certain family of networks, as compared to the best possible transport capacity of spl Theta ( spl radic n) bit-meters s in (2000), where the receiver capabilities were limited. Even though the improvement is shown for a specific class of networks, a clear implication is that designing and employing more sophisticated multiuser coding schemes can provide sizable gains in at least some large wireless networks." ] }
cs0503061
2949324727
We introduce the use, monitoring, and enforcement of integrity constraints in trust management-style authorization systems. We consider what portions of the policy state must be monitored to detect violations of integrity constraints. Then we address the fact that not all participants in a trust management system can be trusted to assist in such monitoring, and show how many integrity constraints can be monitored in a conservative manner so that trusted participants detect and report if the system enters a policy state from which evolution in unmonitored portions of the policy could lead to a constraint violation.
In , we listed several papers presenting various trust management systems. None of these incorporates a notion of integrity constraints. The work in trust management that is most closely related is @cite_13 . As we discussed at the beginning of , that work is complementary to ours. It studies the problem of determining, given a state , a role monitor , and a constraint @math , whether there is a reachable state in which @math is violated. By contrast, we analyze the problem of which roles must have their definitions monitored to detect when such a state is entered.
{ "cite_N": [ "@cite_13" ], "mid": [ "2043144080" ], "abstract": [ "Trust management is a form of distributed access control that allows one principal to delegate some access decisions to other principals. While the use of delegation greatly enhances flexibility and scalability, it may also reduce the control that a principal has over the resources it owns. Security analysis asks whether safety, availability, and other properties can be maintained while delegating to partially trusted principals. We show that in contrast to the undecidability of classical Harrison--Ruzzo--Ullman safety properties, our primary security properties are decidable. In particular, most security properties we study are decidable in polynomial time. The computational complexity of containment analysis, the most complicated security property we study, varies according to the expressive power of the trust management language." ] }
cs0502003
1709954365
We consider the simulation of wireless sensor networks (WSN) using a new approach. We present Shawn, an open-source discrete-event simulator that has considerable differences to all other existing simulators. Shawn is very powerful in simulating large scale networks with an abstract point of view. It is, to the best of our knowledge, the first simulator to support generic high-level algorithms as well as distributed protocols on exactly the same underlying networks.
TOSSIM, the "TinyOS mote simulator", simulates TinyOS @cite_16 motes at the bit level and is hence a platform-specific simulator/emulator. It directly compiles code written for TinyOS to an executable file that can be run on standard PC equipment. Using this technique, developers can test their implementation without having to deploy it on real sensor network hardware. TOSSIM can run simulations with a few thousand virtual TinyOS nodes. It ships with a GUI ("TinyViz") that can visualize and interact with running simulations. Just recently, PowerTOSSIM @cite_3 , a power modeling extension, has been integrated into TOSSIM. PowerTOSSIM models the power consumed by TinyOS applications and includes a detailed model of the power consumption of the Mica2 @cite_13 motes.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_3" ], "mid": [ "1971903460", "", "2110936068" ], "abstract": [ "We present nesC, a programming language for networked embedded systems that represent a new design space for application developers. An example of a networked embedded system is a sensor network, which consists of (potentially) thousands of tiny, low-power \"motes,\" each of which execute concurrent, reactive programs that must operate with severe memory and power constraints.nesC's contribution is to support the special needs of this domain by exposing a programming model that incorporates event-driven execution, a flexible concurrency model, and component-oriented application design. Restrictions on the programming model allow the nesC compiler to perform whole-program analyses, including data-race detection (which improves reliability) and aggressive function inlining (which reduces resource consumption).nesC has been used to implement TinyOS, a small operating system for sensor networks, as well as several significant sensor applications. nesC and TinyOS have been adopted by a large number of sensor network research groups, and our experience and evaluation of the language shows that it is effective at supporting the complex, concurrent programming style demanded by this new class of deeply networked systems.", "", "Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals. In this paper, we present, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption. PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation runPowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations." ] }
cs0502025
2950320586
The software approach to developing Digital Signal Processing (DSP) applications brings some great features such as flexibility, re-usability of resources and easy upgrading of applications. However, it requires long and tedious tests and verification phases because of the increasing complexity of the software. This implies the need of a software programming environment capable of putting together DSP modules and providing facilities to debug, verify and validate the code. The objective of the work is to provide such facilities as simulation and verification for developing DSP software applications. This led us to develop an extension toolkit, Epspectra, built upon Pspectra, one of the first toolkits available to design basic software radio applications on standard PC workstations. In this paper, we first present Epspectra, an Esterel-based extension of Pspectra that makes the design and implementation of portable DSP applications easier. It allows drastic reduction of testing and verification time while requiring relatively little expertise in formal verification methods. Second, we demonstrate the use of Epspectra, taking as an example the radio interface part of a GSM base station. We also present the verification procedures for the three safety properties of the implementation programs which have complex control-paths. These have to obey strict scheduling rules. In addition, Epspectra achieves the verification of the targeted application since the same model is used for the executable code generation and for the formal verification.
The authors of @cite_3 proposed to dynamically select a suitable partitioning according to the property to be proved, avoiding the exponential explosion of the analysis caused by an overly detailed partitioning.
{ "cite_N": [ "@cite_3" ], "mid": [ "1582451030" ], "abstract": [ "We apply linear relation analysis [CH78, HPR97] to the verification of declarative synchronous programs [Hal98]. In this approach, state partitioning plays an important role: on one hand the precision of the results highly depends on the fineness of the partitioning; on the other hand, a too much detailed partitioning may result in an exponential explosion of the analysis. In this paper, we propose to dynamically select a suitable partitioning according to the property to be proved." ] }
cs0502056
2949328284
The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded @math AuthorRank$ as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL).
Social network analysis is based on the premise that the relationships between social actors can be described by a graph. The graph's nodes represent social actors and the graph's edges connect pairs of nodes and thus represent social interactions. This representation allows researchers to apply graph theory @cite_12 to the analysis of what would otherwise be considered an inherently elusive and poorly understood problem: the tangled web of our social interactions. In this article, we will assume such graph representation and use the terms , , and interchangeably. The terms , , and are also used interchangeably.
{ "cite_N": [ "@cite_12" ], "mid": [ "2061901927" ], "abstract": [ "Part I. Introduction: Networks, Relations, and Structure: 1. Relations and networks in the social and behavioral sciences 2. Social network data: collection and application Part II. Mathematical Representations of Social Networks: 3. Notation 4. Graphs and matrixes Part III. Structural and Locational Properties: 5. Centrality, prestige, and related actor and group measures 6. Structural balance, clusterability, and transitivity 7. Cohesive subgroups 8. Affiliations, co-memberships, and overlapping subgroups Part IV. Roles and Positions: 9. Structural equivalence 10. Blockmodels 11. Relational algebras 12. Network positions and roles Part V. Dyadic and Triadic Methods: 13. Dyads 14. Triads Part VI. Statistical Dyadic Interaction Models: 15. Statistical analysis of single relational networks 16. Stochastic blockmodels and goodness-of-fit indices Part VII. Epilogue: 17. Future directions." ] }
cs0502056
2949328284
The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded @math AuthorRank$ as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL).
An early example of a co-authorship network is the Erdős Number Project, in which the smallest number of co-authorship links between any individual mathematician and the Hungarian mathematician Erdős is calculated @cite_23 . (A mathematician's "Erdős Number" is analogous to an actor's "Bacon Number".) Newman studied and compared the co-authorship graphs of arXiv, Medline, SPIRES, and NCSTRL @cite_19 @cite_3 and found a number of network differences between experimental and theoretical disciplines. Co-authorship analysis has also been applied to various ACM conferences: Information Retrieval (SIGIR) @cite_9 , Management of Data (SIGMOD) @cite_17 and Hypertext @cite_14 , as well as mathematics and neuroscience @cite_6 , information systems @cite_5 , and the field of social network analysis @cite_27 . International co-authorship networks have been studied in the Journal of the American Society for Information Science & Technology (JASIST) @cite_4 and the Science Citation Index @cite_33 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_9", "@cite_3", "@cite_6", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_17" ], "mid": [ "1974087593", "2060460295", "", "2068557399", "1671906456", "2155969378", "2025572017", "2127775997", "", "", "" ], "abstract": [ "This paper presents the analysis and modelling of two data sets associated with the literature of hypertext as represented by the ACM Hypertext conference series. This work explores new ways of organising and accessing the vast amount of interrelated information. The first data set, including all the full papers published in this series (1987 1998), is structured and visualised as a semantic space. This semantic space provides an access point for each paper in this collection. The second data set, containing author co-citation counts based on nine conferences in the series (1989 1998), is analysed and mapped in its entirety and in three evenly distributed sub-periods. Specialties  major research fronts in the field of hypertext  are identified based on the results of a factor analysis and corresponding author co-citation maps. The names of authors in these maps are linked to the bibliographical and citation summaries of these authors on the WWW.", "This article reports findings from a study of the geographic distribution of foreign authors in the Journal of American Society for Information Science & Technology (JASIST) and Journal of Documentation (JDoc). Bibliographic data about foreign authors and their geographic locations from a 50-year publication period (1950-1999) are analyzed per 5-year period for both JASIST and JDoc. The distribution of foreign authors by geographic locations was analyzed for the overall trends in JASIST and JDoc. UK and Canadian authors are the most frequent foreign authors in JASIST. Authors from the United States and Canada are the most frequent foreign authors in JDoc. The top 10 geographic locations with highest number of foreign authors and the top 10 most productive foreign authors were also identified and compared for their characteristics and trends.", "", "As part of the celebration of twenty-five years of ACM SIGIR conferences we performed a content analysis of all papers published in the proceedings of SIGIR conferences, including those from 2002. From this we determined, using information retrieval approaches of course, which topics had come and gone over the last two and a half decades, and which topics are currently \"hot\". We also performed a co-authorship analysis among authors of the 853 SIGIR conference papers to determine which author is the most \"central\" in terms of a co-authorship graph and is our equivalent of Paul Erdos in Mathematics. In the first section we report on the content analysis, leading to our prediction as to the most topical paper likely to appear at SIGIR2003. In the second section we present details of our co-authorship analysis, revealing who is the \"Christopher Lee\" of SIGIR, and in the final section we give pointers to where readers who are SIGIR conference paper authors may find details of where they fit into the coauthorship graph.", "Using data from computer databases of scientific papers in physics, biomedical research, and computer science, we have constructed networks of collaboration between scientists in each of these disciplines. In these networks two scientists are considered connected if they have coauthored one or more papers together. 
We have studied many statistical properties of our networks, including numbers of papers written by authors, numbers of authors per paper, numbers of collaborators that scientists have, typical distance through the network from one scientist to another, and a variety of measures of connectedness within a network, such as closeness and betweenness. We further argue that simple networks such as these cannot capture the variation in the strength of collaborative ties and propose a measure of this strength based on the number of papers coauthored by pairs of scientists, and the number of other scientists with whom they worked on those papers. Using a selection of our results, we suggest a variety of possible ways to answer the question, \"Who is the best connected scientist?\"", "We analyze growing networks ranging from collaboration graphs of scientists to the network of similarities defined among the various transcriptional profiles of living cells. For the explicit demonstration of the scale-free nature and hierarchical organization of these graphs, a deterministic construction is also used. We demonstrate the use of determining the eigenvalue spectra of sparse random graph models for the categorization of small measured networks.", "", "Social network analysis (SNA) is not a formal theory in sociology but rather a strategy for investigating social structures. As it is an idea that can be applied in many fields, we study, in partic...", "", "", "" ] }
cs0502088
2950691337
In [Hitzler and Wendt 2002, 2005], a new methodology has been proposed which allows one to derive uniform characterizations of different declarative semantics for logic programs with negation. One result from this work is that the well-founded semantics can formally be understood as a stratified version of the Fitting (or Kripke-Kleene) semantics. The constructions leading to this result, however, show a certain asymmetry which is not readily understood. We study this situation here and obtain a coherent picture of the relations between different semantics for normal logic programs.
Loyer, Spyratos and Stamate, in @cite_11 , presented a parametrized approach to different semantics. It allows one to replace the preference for falsehood with a preference for truth in the stable and well-founded semantics, but uses entirely different means than those presented here. Its purpose is also different --- while we focus on strengthening the mathematical foundations of the field, the work in @cite_11 is motivated by the need to deal with the open vs. closed world assumption in some application settings. The exact relationship between their approach and ours remains to be worked out.
{ "cite_N": [ "@cite_11" ], "mid": [ "2141863299" ], "abstract": [ "The different semantics that can be assigned to a logic program correspond to different assumptions made concerning the atoms that are rule heads and whose logical values cannot be inferred from the rules. For example, the well founded semantics corresponds to the assumption that every such atom is false, while the Kripke-Kleene semantics corresponds to the assumption that every such atom is unknown. In this paper, we propose to unify and extend this assumption-based approach by introducing parameterized semantics for logic programs. The parameter holds the value that one assumes for all rule heads whose logical values cannot be inferred from the rules. We work within multi-valued logic with bilattice structure, and we consider the class of logic programs defined by Fitting.Following Fitting's approach, we define an operator that allows us to compute the parameterized semantic, and to compare and combine semantics obtained for different values of the parameter. We show that our approach captures and extends the usual semantics of conventional logic programs thereby unifying their computation." ] }
cs0501006
2949144408
The paper considers various formalisms based on Automata, Temporal Logic and Regular Expressions for specifying queries over sequences. Unlike traditional binary semantics, the paper presents a similarity-based semantics for these formalisms. More specifically, a distance measure in the range [0,1] is associated with a (sequence, query) pair, denoting how closely the sequence satisfies the query. These measures are defined using a spectrum of normed vector distance measures. Various distance measures based on the syntax and the traditional semantics of the query are presented. Efficient algorithms for computing these distance measures are presented. These algorithms can be employed for retrieval of sequences from a database that closely satisfy a given query.
There have been various formalisms for representing uncertainty (see @cite_21 ) such as probability measures, Dempster-Shafer belief functions, plausibility measures, etc. Our similarity measures for temporal logics and automata can possibly be categorized under plausibility measures and they are quite different from probability measures. The book @cite_21 also describes logics for reasoning about uncertainty. Also, probabilistic versions of Propositional Dynamic Logics were presented in @cite_25 . However, these works do not consider logics and formalisms on sequences, and do not use the various vector distance measures considered in this paper.
{ "cite_N": [ "@cite_21", "@cite_25" ], "mid": [ "1526328753", "1982800675" ], "abstract": [ "In order to deal with uncertainty intelligently, we need to be able to represent it and reason about it. In this book, Joseph Halpern examines formal ways of representing uncertainty and considers various logics for reasoning about it. While the ideas presented are formalized in terms of definitions and theorems, the emphasis is on the philosophy of representing and reasoning about uncertainty. Halpern surveys possible formal systems for representing uncertainty, including probability measures, possibility measures, and plausibility measures; considers the updating of beliefs based on changing information and the relation to Bayes' theorem; and discusses qualitative, quantitative, and plausibilistic Bayesian networks. This second edition has been updated to reflect Halpern's recent research. New material includes a consideration of weighted probability measures and how they can be used in decision making; analyses of the Doomsday argument and the Sleeping Beauty problem; modeling games with imperfect recall using the runs-and-systems approach; a discussion of complexity-theoretic considerations; the application of first-order conditional logic to security. Reasoning about Uncertainty is accessible and relevant to researchers and students in many fields, including computer science, artificial intelligence, economics (particularly game theory), mathematics, philosophy, and statistics.", "In this paper we give a probabilistic analog PPDL of Propositional Dynamic Logic. We prove a small model property and give a polynomial space decision procedure for formulas involving well-structured programs. We also give a deductive calculus and illustrate its use by calculating the expected running time of a simple random walk program." ] }
cs0501006
2949144408
The paper considers various formalisms based on Automata, Temporal Logic and Regular Expressions for specifying queries over sequences. Unlike traditional binary semantics, the paper presents a similarity-based semantics for these formalisms. More specifically, a distance measure in the range [0,1] is associated with a (sequence, query) pair, denoting how closely the sequence satisfies the query. These measures are defined using a spectrum of normed vector distance measures. Various distance measures based on the syntax and the traditional semantics of the query are presented. Efficient algorithms for computing these distance measures are presented. These algorithms can be employed for retrieval of sequences from a database that closely satisfy a given query.
Since the appearance of a preliminary version of this paper @cite_19 , other non-probabilistic quantitative versions of temporal logic have been proposed in @cite_23 @cite_13 . Both these works consider infinite computations and branching time temporal logics. The similarity measure they give, for the linear time fragment of their logic, corresponds to the infinity norm among the vector distance functions. In contrast, we consider formalisms and logics on finite sequences and give similarity-based measures that use a spectrum of vector distance measures. We also present methods for computing the similarity values of a database sequence with respect to queries given in the different formalisms.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_23" ], "mid": [ "1712909018", "", "2109187338" ], "abstract": [ "Similarity based retrieval is of major importance for querying sequence databases. We consider formalisms based on automata, temporal logics and regular expressions for querying such databases. We define two different types of similarity measures--syntax based and semantics based. These measures are divided into a spectrum of measures based on the vector distance function that is employed. We consider norm vector distance functions and give efficient query processing algorithms when these measures are employed.", "", "Temporal logic is two-valued: formulas are interpreted as either true or false. When applied to the analysis of stochastic systems, or systems with imprecise formal models, temporal logic is therefore fragile: even small changes in the model can lead to opposite truth values for a specification. We present a generalization of the branching-time logic CTL which achieves robustness with respect to model perturbations by giving a quantitative interpretation to predicates and logical operators, and by discounting the importance of events according to how late they occur. In every state, the value of a formula is a real number in the interval [0,1], where 1 corresponds to truth and 0 to falsehood. The boolean operators and and or are replaced by min and max, the path quantifiers ∃ and ¬ determine sup and inf over all paths from a given state, and the temporal operators ♦ and □ specify sup and inf over a given path; a new operator averages all values along a path. Furthermore, all path operators are discounted by a parameter that can be chosen to give more weight to states that are closer to the beginning of the path.We interpret the resulting logic DCTL over transition systems, Markov chains, and Markov decision processes. We present two semantics for DCTL: a path semantics, inspired by the standard interpretation of state and path formulas in CTL, and a fixpoint semantics, inspired by the µ-calculus evaluation of CTL formulas. We show that, while these semantics coincide for CTL, they differ for DCTL, and we provide model-checking algorithms for both semantics." ] }
cs0412007
2140668275
Mapping the Internet generally consists in sampling the network from a limited set of sources by using traceroute-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. In this paper, we explore these biases and provide a statistical analysis of their origin. We derive an analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with broad distributions of connectivity. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in network models with different topologies. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. Moreover, we characterize the level of redundancy and completeness of the exploration process as a function of the topological properties of the network. Finally, we study numerically how the fraction of vertices and edges discovered in the sampled graph depends on the particular deployments of probing sources. The results might hint at the steps toward more efficient mapping strategies.
In this section, we briefly review some recent works devoted to the sampling of graphs by shortest path probing procedures. The authors of @cite_20 have shown that biases can seriously affect the estimation of degree distributions. In particular, power-law like distributions can be observed for subgraphs of Erdős-Rényi random graphs when the subgraph is the product of a traceroute exploration with relatively few sources and destinations. They discuss the origin of these biases and the effect of the distance between source and target in the mapping process. In a recent work @cite_23 , Clauset and Moore have given analytical foundations to the numerical work of @cite_20 . They have modeled the single-source probing to all possible destinations using differential equations. For an Erdős-Rényi random graph with average degree @math , they have found that the connectivity distribution of the obtained spanning tree displays a power-law behavior @math , with an exponential cut-off setting in at a characteristic degree @math .
{ "cite_N": [ "@cite_23", "@cite_20" ], "mid": [ "1693225151", "2107648668" ], "abstract": [ "Despite great effort spent measuring topological features of large networks like the Internet, it was recently argued that sampling based on taking paths through the network (e.g., traceroutes) introduces a fundamental bias in the observed degree distribution. We examine this bias analytically and experimentally. For classic random graphs with mean degree c, we show analytically that traceroute sampling gives an observed degree distribution P(k) 1 k for k < c, even though the underlying degree distribution is Poisson. For graphs whose degree distributions have power-law tails P(k) k^-alpha, the accuracy of traceroute sampling is highly sensitive to the population of low-degree vertices. In particular, when the graph has a large excess (i.e., many more edges than vertices), traceroute sampling can significantly misestimate alpha.", "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias." ] }
cs0412007
2140668275
Mapping the Internet generally consists in sampling the network from a limited set of sources by using traceroute-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. In this paper, we explore these biases and provide a statistical analysis of their origin. We derive an analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with broad distributions of connectivity. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in network models with different topologies. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. Moreover, we characterize the level of redundancy and completeness of the exploration process as a function of the topological properties of the network. Finally, we study numerically how the fraction of vertices and edges discovered in the sampled graph depends on the particular deployments of probing sources. The results might hint at the steps toward more efficient mapping strategies.
In a slightly different context, Petermann and De Los Rios have studied a traceroute-like procedure on various examples of scale-free graphs @cite_10 , showing that, in the case of a single source, power-law distributions with underestimated exponents are obtained. Analytical estimates of the measured exponents as a function of the true ones were also derived. Finally, a recent preprint by Guillaume and Latapy @cite_27 reports on shortest-path explorations of synthetic graphs, focusing on the comparison of the properties of the resulting sampled graph with those of the original network. The proportion of discovered vertices and edges in the graph as a function of the number of sources and targets also gives hints for an optimization of the exploration process.
{ "cite_N": [ "@cite_27", "@cite_10" ], "mid": [ "1479935453", "2964134979" ], "abstract": [ "Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conduced to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, we show that the exploration process induces some properties on the maps. We validate our analysis using real-world data and experiments and we discuss its implications.", "The increased availability of data on real networks has favoured an explosion of activity in the elaboration of models able to reproduce both qualitatively and quantitatively the measured properties. What has been less explored is the reliability of the data, and whether the measurement technique biases them. Here we show that tree-like explorations (similar in principle to traceroute) can indeed change the measured exponents of a scale-free network." ] }
cond-mat0412368
2949195487
Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them however is generally expensive. We propose here a measure of similarities between vertices based on random walks which has several important advantages: it captures well the community structure in a network, it can be computed efficiently, it works at various scales, and it can be used in an agglomerative algorithm to compute efficiently the community structure of a network. We propose such an algorithm which runs in time O(mn^2) and space O(n^2) in the worst case, and in time O(n^2log n) and space O(n^2) in most real-world cases (n and m are respectively the number of vertices and edges in the input graph). Experimental evaluation shows that our algorithm surpasses previously proposed ones concerning the quality of the obtained community structures and that it stands among the best ones concerning the running time. This is very promising because our algorithm can be improved in several ways, which we sketch at the end of the paper.
In the current situation, one can process graphs with up to a few hundred thousand vertices using the method in @cite_17 . All other algorithms have more limited performance (they generally cannot manage more than a few thousand vertices).
{ "cite_N": [ "@cite_17" ], "mid": [ "2047940964" ], "abstract": [ "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers." ] }
cs0412021
2950884425
A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency.
Lhomme @cite_15 defines arc B-consistency, which formalizes bounds propagation techniques for numeric CSPs. He proposes an efficient propagation algorithm implementing arc B-consistency, together with a complexity analysis and experimental results. However, his study focuses on numeric CSPs: unlike our definition of CSPs, constraints in numeric CSPs cannot be given extensionally and must be defined by numeric relations, which can be interpreted in either the real or the finite integer domain. Numeric CSPs also restrict the domain of each variable to a single interval.
{ "cite_N": [ "@cite_15" ], "mid": [ "1548650523" ], "abstract": [ "Many problems can be expressed in terms of a numeric constraint satisfaction problem over finite or continuous domains (numeric CSP). The purpose of this paper is to show that the consistency techniques that have been developed for CSPs can be adapted to numeric CSPs. Since the numeric domains are ordered the underlying idea is to handle domains only by their bounds. The semantics that have been elaborated, plus the complexity analysis and good experimental results, confirm that these techniques can be used in real applications." ] }
cs0412021
2950884425
A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency.
Maher @cite_9 introduces the notion of propagation completeness together with a general framework that unifies a wide range of consistency notions. These include hull consistency of real constraints and consistency of integer constraints. Propagation completeness aims to capture the timeliness property of propagation.
{ "cite_N": [ "@cite_9" ], "mid": [ "1594857028" ], "abstract": [ "We develop a framework for addressing correctness and timeliness-of-propagation issues for reactive constraints - global constraints or user-defined constraints that are implemented through constraint propagation. The notion of propagation completeness is introduced to capture timeliness of constraint propagation. A generalized form of arc-consistency is formulated which unifies many local consistency conditions in the literature. We show that propagation complete implementations of reactive constraints achieve this arc-consistency when propagation quiesces. Finally, we use the framework to state and prove an impossibility result: that CHR cannot implement a common relation with a desirable degree of timely constraint propagation." ] }
cs0412021
2950884425
A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency.
The application of bounds consistency is not limited to integer and real constraints. Bounds consistency has been formalized for solving set constraints @cite_13 , and more recently, multiset constraints @cite_12 .
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "2085716817", "1500096329" ], "abstract": [ "Local consistency techniques have been introduced in logic programming in order to extend the application domain of logic programming languages. The existing languages based on these techniques consider arithmetic constraints applied to variables ranging over finite integer domains. This makes difficult a natural and concise modelling as well as an efficient solving of a class of N P-complete combinatorial search problems dealing with sets. To overcome these problems, we propose a solution which consists in extending the notion of integer domains to that of set domains (sets of sets). We specify a set domain by an interval whose lower and upper bounds are known sets, ordered by set inclusion. We define the formal and practical framework of a new constraint logic programming language over set domains, called Conjunto. Conjunto comprises the usual set operation symbols( n), and the set inclusion relation (). Set expressions built using the operation symbols are interpreted as relations (s s 1 = s 2 ,...). In addition, Conjunto provides us with a set of constraints called graduated constraints (e.g. the set cardinality) which map sets onto arithmetic terms. This allows us to handle optimization problems by applying a cost function to the quantiiable, i.e., arithmetic, terms which are associated to set terms. The constraint solving in Conjunto is based on local consistency techniques using interval reasoning which are extended to handle set constraints. The main contribution of this paper concerns the formal deenition of the language and its design and implementation as a practical language.", "We study from a formal perspective the consistency and propagation of constraints involving multiset variables. That is, variables whose values are multisets. These help us model problems more naturally and can, for example, prevent introducing unnecessary symmetry into a model. We identify a number of different representations for multiset variables and compare them. We then propose a definition of local consistency for constraints involving multiset, set and integer variables. This definition is a generalization of the notion of bounds consistency for integer variables. We show how this local consistency property can be enforced by means of some simple inference rules which tighten bounds on the variables. We also study a number of global constraints on set and multiset variables. Surprisingly, unlike finite domain variables, the decomposition of global constraints over set or multiset variables often does not hinder constraint propagation." ] }
cs0412041
1670256845
An efficient and flexible engine for computing fixed points is critical for many practical applications. In this paper, we firstly present a goal-directed fixed point computation strategy in the logic programming paradigm. The strategy adopts a tabled resolution (or memorized resolution) to mimic the efficient semi-naive bottom-up computation. Its main idea is to dynamically identify and record those clauses that will lead to recursive variant calls, and then repetitively apply those alternatives incrementally until the fixed point is reached. Secondly, there are many situations in which a fixed point contains a large number or even infinite number of solutions. In these cases, a fixed point computation engine may not be efficient enough or feasible at all. We present a mode-declaration scheme which provides the capabilities to reduce a fixed point from a big solution set to a preferred small one, or from an infeasible infinite set to a finite one. The mode declaration scheme can be characterized as a meta-level operation over the original fixed point. We show the correctness of the mode declaration scheme. Thirdly, the mode-declaration scheme provides a new declarative method for dynamic programming, which is typically used for solving optimization problems. There is no need to define the value of an optimal solution recursively, instead, defining a general solution suffices. The optimal value as well as its corresponding concrete solution can be derived implicitly and automatically using a mode-directed fixed point computation engine. Finally, this fixed point computation engine has been successfully implemented in a commercial Prolog system. Experimental results are shown to indicate that the mode declaration improves both time and space performances in solving dynamic programming problems.
The huge effort needed for implementing OLDT and SLG can be avoided by choosing alternative methods for tabled resolution that maintain a single computation tree similar to traditional SLD resolution, rather than maintaining a forest of SLD trees. SLDT resolution @cite_23 @cite_9 was the first attempt in this direction. The main idea behind SLDT is to steal the backtracking point---using the terminology in @cite_23 @cite_9 ---of the previous tabled call when a variant call is found, to avoid exploring the current recursive clause which may lead to non-termination. However, because the variant call avoids applying the same recursive clause as the previous call, the computation may be incomplete. Thus, repeated computation of tabled calls is required to make up for the lost answers and to make sure that the fixed point is complete. SLDT does not propose a complete theory regarding when a tabled call is completely evaluated; rather, it relies on blindly recomputing the tabled calls to ensure completeness. SLDT resolution was implemented in early versions of the B-Prolog system. However, this resolution strategy has recently been discarded; instead, a variant of DRA resolution @cite_22 has been adopted in the latest version of the B-Prolog system @cite_1 .
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_22", "@cite_23" ], "mid": [ "2096979400", "2042416159", "2155945137", "2080716125" ], "abstract": [ "Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog. Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.", "Early resolution mechanisms proposed for tabling such as OLDT rely on suspension and resumption of subgoals to compute fixpoints. Recently, a new resolution framework called linear tabling has emerged as an alternative tabling method. The idea of linear tabling is to use iterative computation rather than suspension to compute fixpoints. Although linear tabling is simple, easy to implement, and superior in space efficiency, the current implementations are several times slower than XSB, the state-of-the-art implementation of OLDT, due to re-evaluation of looping subgoals. In this paper, we present a new linear tabling method and propose several optimization techniques for fast computation of fixpoints. The optimization techniques significantly improve the performance by avoiding redundant evaluation of subgoals, re-application of clauses, and reproduction of answers in iterative computation. Our implementation of the method in B-Prolog not only consumes an order of magnitude less stack space than XSB for some programs but also compares favorably well with XSB in speed.", "Tabled logic programming (LP) systems have been applied to elegantly and quickly solving very complex problems (e.g., model checking). However, techniquescurren tly employed for incorporating tabling in an existing LP system are quite complex and require considerable change to the LP system. We present a simple technique for incorporating tabling in existing LP systems based on dynamically reordering clauses containing variant callsat run-time. Our simple technique allows tabled evaluation to be performed with a single SLD tree and without the use of complex operations such as freezing of stacks and heap. It can be incorporated in an existing logic programming system with a small amount of effort. Our scheme also facilitates exploitation of parallelism from tabled LP systems. Results of incorporating our scheme in the commercial ALS Prolog system are reported.", "Infinite loops and redundant computations are long recognized open problems in Prolog. Two methods have been explored to resolve these problems: loop checking and tabling. 
Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG-resolution and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) it makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced. It handles cuts as effectively as Prolog; and (2) it is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM." ] }
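A rough Python analogue of the fixed-point idea behind these tabling methods: answers for a recursive predicate are stored in a table, and evaluation is iterated until no new answers appear, so a cyclic call graph does not cause an infinite loop. This mirrors the bottom-up fixed-point computation that tabled resolution mimics, not the goal-directed Prolog machinery itself; the edge relation is an arbitrary toy example.

```python
# Tabled fixed-point sketch for  path(X,Y) :- edge(X,Y).
#                                path(X,Y) :- edge(X,Z), path(Z,Y).
# The table of derived answers is grown iteratively until it stops changing,
# so the cycle a -> b -> c -> a does not lead to non-termination.
EDGES = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}

def reachable():
    table = set(EDGES)                 # answers derived so far
    changed = True
    while changed:                     # iterate to the fixed point
        changed = False
        for (x, z) in EDGES:
            for (z2, y) in list(table):
                if z == z2 and (x, y) not in table:
                    table.add((x, y))
                    changed = True
    return table

print(sorted(reachable()))
```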
cs0412042
2952981805
In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3.
Constraint satisfaction problems (CSPs) have always played a central role in this direction of research, since the CSP framework contains many natural computational problems, for example, from graph theory and propositional logic. Moreover, certain CSPs were used to build foundations for the theory of complexity for optimization problems @cite_13 , and some CSPs provided material for the first optimal inapproximability results @cite_8 (see also the survey @cite_15 ). In a CSP, informally speaking, one is given a finite collection of constraints on overlapping sets of variables, and the goal is to decide whether there is an assignment of values from a given domain to the variables satisfying all constraints (decision problem) or to find an assignment satisfying the maximum number of constraints (optimization problem). In this paper we will focus on the optimization problems, which are known as maximum constraint satisfaction problems, Max CSP for short. The most well-known examples of such problems are Max @math -Sat and Max Cut. Let us now formally define these problems.
{ "cite_N": [ "@cite_15", "@cite_13", "@cite_8" ], "mid": [ "", "2038225707", "1999032440" ], "abstract": [ "", "We define a natural variant of NP, MAX NP, and also a subclass called MAX SNP. These are classes of optimization problems, and in fact contain several natural, well-studied ones. We show that problems in these classes can be approximated with some bounded error. Furthermore, we show that a number of common optimization problems are complete for MAX SNP under a kind of careful transformation (called L-reduction) that preserves approximability. It follows that such a complete problem has a polynomial-time approximation scheme iff the whole class does. These results may help explain the lack of progress on the approximability of a host of optimization problems.", "We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover." ] }
cs0412042
2952981805
In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3.
Note that throughout the paper the values 0 and 1 taken by any predicate will be considered, rather unusually, as integers, not as Boolean values, and addition will always denote the addition of integers. It is easy to check that, in the Boolean case, our problem coincides with the Max CSP problem considered in @cite_18 @cite_21 @cite_2 . We say that a predicate is non-trivial if it is not identically 0. Throughout the paper, we assume that @math is finite and contains only non-trivial predicates.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_2" ], "mid": [ "", "1571873445", "2068190866" ], "abstract": [ "", "Preface 1. Introduction 2. Complexity Classes 3. Boolean Constraint Satisfaction Problems 4. Characterizations of Constraint Functions 5. Implementation of Functions and Reductions 6. Classification Theorems for Decision, Counting and Quantified Problems 7. Classification Theorems for Optimization Problems 8. Input-Restricted Constrained Satisfaction Problems 9. The Complexity of the Meta-Problems 10. Concluding Remarks Bibliography Index.", "We study optimization problems that may be expressed as \"Boolean constraint satisfaction problems.\" An instance of a Boolean constraint satisfaction problem is given by m constraints applied to n Boolean variables. Different computational problems arise from constraint satisfaction problems depending on the nature of the \"underlying\" constraints as well as on the goal of the optimization task. Here we consider four possible goals: Max CSP (Min CSP) is the class of problems where the goal is to find an assignment maximizing the number of satisfied constraints (minimizing the number of unsatisfied constraints). Max Ones (Min Ones) is the class of optimization problems where the goal is to find an assignment satisfying all constraints with maximum (minimum) number of variables set to 1. Each class consists of infinitely many problems and a problem within a class is specified by a finite collection of finite Boolean functions that describe the possible constraints that may be used. Tight bounds on the approximability of every problem in Max CSP were obtained by Creignou [ J. Comput. System Sci., 51 (1995), pp. 511--522]. In this work we determine tight bounds on the \"approximability\" (i.e., the ratio to within which each problem may be approximated in polynomial time) of every problem in Max Ones, Min CSP, and Min Ones. Combined with the result of Creignou, this completely classifies all optimization problems derived from Boolean constraint satisfaction. Our results capture a diverse collection of optimization problems such as MAX 3-SAT, Max Cut, Max Clique, Min Cut, Nearest Codeword, etc. Our results unify recent results on the (in-)approximability of these optimization problems and yield a compact presentation of most known results. Moreover, these results provide a formal basis to many statements on the behavior of natural optimization problems that have so far been observed only empirically." ] }
cs0412042
2952981805
In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3.
The Max-CSP framework has been well-studied in the Boolean case. Many fundamental results have been obtained, concerning both complexity classifications and approximation properties (see, e.g., @cite_18 @cite_21 @cite_8 @cite_3 @cite_2 @cite_26 ). In the non-Boolean case, a number of results have been obtained that concern exact (superpolynomial) algorithms or approximation properties (see, e.g., @cite_5 @cite_1 @cite_0 @cite_10 ). The main research problem we will look at in this paper is the following.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_21", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "", "1967935161", "1999032440", "1571873445", "", "2028733517", "2097646889", "2068190866", "2084350425", "1546177704" ], "abstract": [ "", "", "We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover.", "Preface 1. Introduction 2. Complexity Classes 3. Boolean Constraint Satisfaction Problems 4. Characterizations of Constraint Functions 5. Implementation of Functions and Reductions 6. Classification Theorems for Decision, Counting and Quantified Problems 7. Classification Theorems for Optimization Problems 8. Input-Restricted Constrained Satisfaction Problems 9. The Complexity of the Meta-Problems 10. Concluding Remarks Bibliography Index.", "", "A boolean constraint satisfaction problem consists of some finite set of constraints (i.e., functions from 0 1-vectors to 0, 1 ) and an instance of such a problem is a set of constraints applied to specified subsets of n boolean variables. The goal is to find an assignment to the variables which satisfy all constraint applications. The computational complexity of optimization problems in connection with such problems has been studied extensively but the results have relied on the assumption that the weights are non-negative. The goal of this article is to study variants of these optimization problems where arbitrary weights are allowed. For the four problems that we consider, we give necessary and sufficient conditions for when the problems can be solved in polynomial time. In addition, we show that the problems are NP-equivalent in all other cases. (C) 2000 Elsevier Science B.V. All rights reserved.", "By the breakthrough work of Hastad [J ACM 48(4) (2001), 798–859], several constraint satisfaction problems are now known to have the following approximation resistance property: Satisfying more clauses than what picking a random assignment would achieve is NP-hard. This is the case for example for Max E3-Sat, Max E3-Lin, and Max E4-Set Splitting. A notable exception to this extreme hardness is constraint satisfaction over two variables (2-CSP); as a corollary of the celebrated Goemans-Williamson algorithm [J ACM 42(6) (1995), 1115–1145], we know that every Boolean 2-CSP has a nontrivial approximation algorithm whose performance ratio is better than that obtained by picking a random assignment to the variables. An intriguing question then is whether this is also the case for 2-CSPs over larger, non-Boolean domains. This question is still open, and is equivalent to whether the generalization of Max 2-SAT to domains of size d, can be approximated to a factor better than (1 − 1 d2). In an attempt to make progress towards this question, in this paper we prove, first, that a slight restriction of this problem, namely, a generalization of linear inequations with two variables per constraint, is not approximation resistant, and, second, that the Not-All-Equal Sat problem over domain size d with three variables per constraint, is approximation resistant, for every d ≥ 3. 
In the Boolean case, Not-All-Equal Sat with three variables per constraint is equivalent to Max 2-SAT and thus has a nontrivial approximation algorithm; for larger domain sizes, Max 2-SAT can be reduced to Not-All-Equal Sat with three variables per constraint. Our approximation algorithm implies that a wide class of 2-CSPs called regular 2-CSPs can all be approximated beyond their random assignment threshold. © 2004 Wiley Periodicals, Inc. Random Struct. Alg. 2004", "We study optimization problems that may be expressed as \"Boolean constraint satisfaction problems.\" An instance of a Boolean constraint satisfaction problem is given by m constraints applied to n Boolean variables. Different computational problems arise from constraint satisfaction problems depending on the nature of the \"underlying\" constraints as well as on the goal of the optimization task. Here we consider four possible goals: Max CSP (Min CSP) is the class of problems where the goal is to find an assignment maximizing the number of satisfied constraints (minimizing the number of unsatisfied constraints). Max Ones (Min Ones) is the class of optimization problems where the goal is to find an assignment satisfying all constraints with maximum (minimum) number of variables set to 1. Each class consists of infinitely many problems and a problem within a class is specified by a finite collection of finite Boolean functions that describe the possible constraints that may be used. Tight bounds on the approximability of every problem in Max CSP were obtained by Creignou [ J. Comput. System Sci., 51 (1995), pp. 511--522]. In this work we determine tight bounds on the \"approximability\" (i.e., the ratio to within which each problem may be approximated in polynomial time) of every problem in Max Ones, Min CSP, and Min Ones. Combined with the result of Creignou, this completely classifies all optimization problems derived from Boolean constraint satisfaction. Our results capture a diverse collection of optimization problems such as MAX 3-SAT, Max Cut, Max Clique, Min Cut, Nearest Codeword, etc. Our results unify recent results on the (in-)approximability of these optimization problems and yield a compact presentation of most known results. Moreover, these results provide a formal basis to many statements on the behavior of natural optimization problems that have so far been observed only empirically.", "We consider the problem MAX CSP over multi-valued domains with variables ranging over sets of size si ≤ s and constraints involving kj ≤ k variables. We study two algorithms with approximation ratios A and B. respectively, so we obtain a solution with approximation ratio max (A, B).The first algorithm is based on the linear programming algorithm of Serna, Trevisan, and Xhafa [Proc. 15th Annual Symp. on Theoret. Aspects of Comput. Sci., 1998, pp. 488-498] and gives ratio A which is bounded below by s1-k. For k = 2, our bound in terms of the individual set sizes is the minimum over all constraints involving two variables of (1 2√s1+ 1 2√s2)2, where s1 and s2 are the set sizes for the two variables.We then give a simple combinatorial algorithm which has approximation ratio B, with B > A e. The bound is greater than s1-k e in general, and greater than s1-k(1 - (s - 1) 2(k - 1)) for s ≤ k - 1, thus close to the s1-k linear programming bound for large k. 
For k = 2, the bound is 4/9 if s = 2, 1/(2(s - 1)) if s ≥ 3, and in general greater than the minimum of 1/(4s1) + 1/(4s2) over constraints with set sizes s1 and s2, thus within a factor of two of the linear programming bound. For the case of k = 2 and s = 2 we prove an integrality gap of 4/9 (1 + O(n^(-1/2))). This shows that our analysis is tight for any method that uses the linear programming upper bound.", "We present parallel approximation algorithms for maximization problems expressible by integer linear programs of a restricted syntactic form introduced by [BKT96]. One of our motivations was to show whether the approximation results in the framework of holds in the parallel setting. Our results are a confirmation of this, and thus we have a new common framework for both computational settings. Also, we prove almost tight non-approximability results, thus solving a main open question of We obtain the results through the constraint satisfaction problem over multi-valued domains, for which we show non-approximability results and develop parallel approximation algorithms. Our parallel approximation algorithms are based on linear programming and random rounding; they are better than previously known sequential algorithms. The non-approximability results are based on new recent progress in the fields of Probabilistically Checkable Proofs and Multi-Prover One-Round Proof Systems [Raz95, Has97, AS97, RS97]." ] }
cs0412042
2952981805
In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3.
For the Boolean case, Problem was solved in @cite_18 @cite_21 @cite_2 . It appears that a Boolean @math also exhibits a dichotomy in that it is either solvable exactly in polynomial time or else does not admit a PTAS (polynomial-time approximation scheme) unless P = NP. These papers also describe the boundary between the two cases.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_2" ], "mid": [ "", "1571873445", "2068190866" ], "abstract": [ "", "Preface 1. Introduction 2. Complexity Classes 3. Boolean Constraint Satisfaction Problems 4. Characterizations of Constraint Functions 5. Implementation of Functions and Reductions 6. Classification Theorems for Decision, Counting and Quantified Problems 7. Classification Theorems for Optimization Problems 8. Input-Restricted Constrained Satisfaction Problems 9. The Complexity of the Meta-Problems 10. Concluding Remarks Bibliography Index.", "We study optimization problems that may be expressed as \"Boolean constraint satisfaction problems.\" An instance of a Boolean constraint satisfaction problem is given by m constraints applied to n Boolean variables. Different computational problems arise from constraint satisfaction problems depending on the nature of the \"underlying\" constraints as well as on the goal of the optimization task. Here we consider four possible goals: Max CSP (Min CSP) is the class of problems where the goal is to find an assignment maximizing the number of satisfied constraints (minimizing the number of unsatisfied constraints). Max Ones (Min Ones) is the class of optimization problems where the goal is to find an assignment satisfying all constraints with maximum (minimum) number of variables set to 1. Each class consists of infinitely many problems and a problem within a class is specified by a finite collection of finite Boolean functions that describe the possible constraints that may be used. Tight bounds on the approximability of every problem in Max CSP were obtained by Creignou [ J. Comput. System Sci., 51 (1995), pp. 511--522]. In this work we determine tight bounds on the \"approximability\" (i.e., the ratio to within which each problem may be approximated in polynomial time) of every problem in Max Ones, Min CSP, and Min Ones. Combined with the result of Creignou, this completely classifies all optimization problems derived from Boolean constraint satisfaction. Our results capture a diverse collection of optimization problems such as MAX 3-SAT, Max Cut, Max Clique, Min Cut, Nearest Codeword, etc. Our results unify recent results on the (in-)approximability of these optimization problems and yield a compact presentation of most known results. Moreover, these results provide a formal basis to many statements on the behavior of natural optimization problems that have so far been observed only empirically." ] }
cs0411010
2952058472
We propose a new simple logic that can be used to specify , i.e. security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties, and integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well studied protocol TMN.
In this section we discuss some related work. In @cite_18 , Roscoe identifies two ways of specifying protocol security goals: firstly, using extensional specifications, and secondly, using intensional specifications. An extensional specification describes the intended service provided by the protocol in terms of behavioural equivalence @cite_8 @cite_7 @cite_0 . On the other hand, an intensional specification describes the underlying mechanism of a protocol, in terms of states or events @cite_2 @cite_5 @cite_18 @cite_6 @cite_17 @cite_13 .
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "", "1991354622", "", "2078142047", "2110423379", "2105237010", "2104130981", "", "" ], "abstract": [ "", "We develop principles and rules for achieving secrecy properties in security protocols. Our approach is based on traditional classification techniques, and extends those techniques to handle concurrent processes that use shared-key cryptography. The rules have the form of typing rules for a basic concurrent language with cryptographic primitives, the spi calculus. They guarantee that, if a protocol typechecks, then it does not leak its secret inputs.", "", "In this paper we present a formal language for specifying and reasoning about cryptographic protocol requirements. We give sets of requirements for key distribution protocols and for key agreement protocols in that language. We look at a key agreement protocol due to Aziz and Diffie that might meet those requirements and show how to specify it in the language of the NRL Protocol Analyzer. We also show how to map our formal requirements to the language of the NRL Protocol Analyzer and use the Analyzer to show that the protocol meets those requirements. In other words, we use the Analyzer to assess the validity of the formulae that make up the requirements in models of the protocol. Our analysis reveals an implicit assumption about implementations of the protocol and reveals subtleties in the kinds of requirements one might specify for similar protocols.", "Security properties such as confidentiality and authenticity may be considered in terms of the flow of messages within a network. To the extent that this characterisation is justified, the use of a process algebra such as Communicating Sequential Processes (CSP) seems appropriate to describe and analyse them. This paper explores ways in which security properties may be described as CSP specifications, how security mechanisms may be captured, and how particular protocols designed to provide these properties may be analysed within the CSP framework. The paper is concerned with the theoretical basis for such analysis. A sketch verification of a simple example is carried out as an illustration.", "We develop a typed process calculus for security protocols in which types convey secrecy properties. We focus on asymmetric communication primitives, especially on public-key encryption. These present special difficulties, partly because they rely on related capabilities (e.g., “public” and “private” keys) with different levels of secrecy and scopes.", "The authors specify authentication protocols as formal objects with precise syntax and semantics, and define a semantic model that characterizes protocol executions. They have identified two basic types of correctness properties, namely, correspondence and secrecy; that underlie the correctness concerns of authentication protocols. Assertions for specifying these properties, and a formal semantics for their satisfaction in the semantic model are defined. The Otway-Rees protocol is used to illustrate the semantic model and the basic correctness properties. >", "", "" ] }