Fields: aid (string, 9–15 chars); mid (string, 7–10 chars); abstract (string, 78–2.56k chars); related_work (string, 92–1.77k chars); ref_abstract (dict).
1906.00850
2947982151
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" that transport data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
In @cite_1 , the authors state that the existing network infrastructure in smart cities cannot sustain the traffic generated by sensors. Overcoming this problem would require investment in telecommunication infrastructure; instead, the authors propose exploiting buses in a Delay Tolerant Network (DTN) to transfer data in smart cities. In @cite_5 , the authors introduce mobile cloud servers, installed on vehicles, and use them in relief efforts of large-scale disasters to collect and share data. These mobile cloud servers convey data among isolated shelters while traveling and finally return to the disaster relief headquarters, which is connected to the Internet; vehicles exchange data while waiting there.
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2782962104", "2607377528" ], "abstract": [ "During large-scale disasters, such as the Great East Japan Earthquake in 2011 or Kumamoto huge Earthquake in 2016, many regions were isolated from critical information exchanges due to problems with communication infrastructures. In those serious disasters, quick and flexible disaster recovery network is required to deliver the disaster related information after disaster. In this paper, mobile cloud computing for vehicle server for information exchange among isolated shelters in such cases is introduced. The vehicle with mobile cloud server traverses the isolated shelters and exchanges information and returns to the disaster headquarter which is connected to Internet. DTN function is introduced to store, carry and exchange message as a message ferry among the shelters even in the challenged network environment where wired and wireless communication means are completely damaged. The prototype system is constructed using Wi-Fi network as mobility network and a note PC mobile cloud server and IBR-DTN and DTN2 software as the DTN function.", "Sensors in future smart cities will continuously monitor the environment in order to prevent critical situations and waste of resources or to offer new services to end users. Likely, the existing networks will not be able to sustain such a traffic without huge investments in the telecommunication infrastructure. One possible solution to overcome these problems is to apply the Delay Tolerant Network (DTN) paradigm. This paper presents the Sink and Delay Aware Bus (S&DA-Bus) routing protocol, a DTN routing protocol designed for smart cities able to exploit mobility of people, vehicles and buses roaming around the city. Particular attention is put on the public transportation system: S&DA-Bus takes advantage of the predictable and quasi-periodic mobility that characterizes it." ] }
@cite_18 conduct a study on using taxi cabs as oblivious data mules for data collection and delivery in smart cities. There is no guarantee on data communications, since taxi cabs are used without any selection criteria. The authors use real taxi traces in the city of Rome and divide the city into blocks of size @math meter @math . Relying only on opportunistic connections between vehicles and nodes, they claim achieving a coverage of 80%. The aforementioned papers mostly utilize multiple relays for transferring data between source-destination locations. Furthermore, these papers do not approach the ferry selection problem from an online perspective. Conversely, in this paper we propose an approach where each vehicle transfers a data bundle from source to destination without having to use relays, and decisions are made in an online fashion; these assumptions are practical as more vehicles are equipped with on-board units (OBUs) and GPS units that provide exact or probabilistic information about the path of the vehicle. Additionally, this paper considers online hiring algorithms for data ferry selection.
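To make the ferry-selection idea above concrete, the following is a minimal sketch of an ensemble of online hiring rules run in passive mode, with the rule that performed best over a recent window making the actual handover decision. It illustrates the general idea only and is not the authors' algorithm or code: the threshold rules, window size, and synthetic delay estimates are all hypothetical.

```python
# Sketch of an ensemble of online "hiring" rules for choosing a data-ferry
# vehicle. Each rule sees candidate vehicles (here summarized by an estimated
# delivery delay) and commits to one; the ensemble runs every rule passively
# and, for each new delivery task, follows the rule with the lowest average
# realized delay over a sliding window of recent tasks.
import random
from collections import deque


class ThresholdRule:
    """Hire the first candidate whose estimated delay is below a fixed threshold."""

    def __init__(self, threshold):
        self.threshold = threshold

    def pick(self, delays):
        for i, d in enumerate(delays):
            if d <= self.threshold:
                return i
        return len(delays) - 1  # no candidate qualified: take the last one


class Ensemble:
    def __init__(self, rules, window=20):
        self.rules = rules
        self.history = [deque(maxlen=window) for _ in rules]

    def pick(self, delays):
        # Follow the rule with the best recent average delay (empty history -> rule 0).
        avg = [sum(h) / len(h) if h else float("inf") for h in self.history]
        best = min(range(len(self.rules)), key=lambda k: avg[k])
        choices = [rule.pick(delays) for rule in self.rules]
        for k, c in enumerate(choices):  # passive mode: record every rule's outcome
            self.history[k].append(delays[c])
        return choices[best]


ensemble = Ensemble([ThresholdRule(t) for t in (10, 20, 40, 80)])
for _ in range(100):  # toy stream of delivery tasks with synthetic delay estimates
    candidate_delays = [random.expovariate(1 / 30) for _ in range(15)]
    chosen = ensemble.pick(candidate_delays)
```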
{ "cite_N": [ "@cite_18" ], "mid": [ "2254736503" ], "abstract": [ "How to deliver data to, or collect data from the hundreds of thousands of sensors and actuators integrated in “things” spread across virtually every smart city streets (garbage cans, storm drains, advertising panels, etc.)? The answer to the question is neither straightforward nor unique, given the scale of the issue, the lack of a single administrative entity for such tiny devices (arguably run by a multiplicity of distinct and independent service providers), and the cost and power concerns that their direct connectivity to the cellular network might pose. This paper posits that one possible alternative consists in connecting such devices to their data collection gateways using “oblivious data mules”, namely transport fleets such as taxi cabs which (unlike most data mules considered in past work) have no relation whatsoever with the smart city service providers, nor are required to follow any pre-established or optimized path, nor are willing to share their LTE connectivity. We experimentally evaluate data collection and delivery performance using real world traces gathered over a six month period in the city of Rome. Results suggest that even relatively small fleets, such as an average of about 120 vehicles, operating in parallel in a very large and irregular city such as Rome, can achieve an 80% coverage of the downtown area in less than 24 h." ] }
1906.00852
2947767238
Conventional application of convolutional neural networks (CNNs) for image classification and recognition is based on the assumption that all target classes are equal (i.e., no hierarchy) and exclusive of one another (i.e., no overlap). CNN-based image classifiers built on this assumption, therefore, cannot take into account an innate hierarchy among target classes (e.g., cats and dogs in animal image classification) or additional information that can be easily derived from the data (e.g., numbers larger than five in the recognition of handwritten digits), thereby resulting in scalability issues when the number of target classes is large. Combining two related but slightly different ideas of hierarchical classification and logical learning by auxiliary inputs, we propose a new learning framework called hierarchical auxiliary learning, which not only addresses the scalability issues with a large number of classes but also could further reduce the classification/recognition errors with a reasonable number of classes. In hierarchical auxiliary learning, target classes are semantically or non-semantically grouped into superclasses, which turns the original problem of mapping between an image and its target class into a new problem of mapping between a pair of an image and its superclass and the target class. To take advantage of superclasses, we introduce an auxiliary block into a neural network, which generates auxiliary scores used as additional information for final classification/recognition; in this paper, we add the auxiliary block between the last residual block and the fully-connected output layer of the ResNet. Experimental results demonstrate that the proposed hierarchical auxiliary learning can reduce classification errors up to 0.56, 1.6 and 3.56 percent with MNIST, SVHN and CIFAR-10 datasets, respectively.
It is well known that transferring learned information to a new task as auxiliary information enables efficient learning of that task @cite_21 , while providing the information acquired by a wider network to a thinner network improves the performance of the thinner network @cite_3 .
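As an illustration of the second point (a thinner network learning from a wider one), here is a generic knowledge-distillation-style sketch in which a small student matches a wide teacher's softened outputs. It is not the hint-based FitNets training of @cite_3 , which additionally matches intermediate representations; the layer sizes, temperature, and loss weighting are arbitrary choices made for this example.

```python
# Generic distillation-style sketch: a thin "student" is trained to match the
# softened outputs of a wide "teacher" in addition to the ground-truth labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))  # wide
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))      # thin
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)


def distillation_loss(x, y, T=4.0, alpha=0.5):
    with torch.no_grad():                       # the teacher is fixed
        t_logits = teacher(x)
    s_logits = student(x)
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(s_logits, y)
    return alpha * soft + (1 - alpha) * hard


x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))  # toy batch
loss = distillation_loss(x, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```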
{ "cite_N": [ "@cite_21", "@cite_3" ], "mid": [ "2165698076", "1690739335" ], "abstract": [ "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network." ] }
Auxiliary information derived from the input data also improves performance. In stage-wise learning, coarse-to-fine images, subsampled from the original images, are fed to the network step by step to enhance the learning process @cite_22 . The ROCK architecture introduces an auxiliary block that performs multiple auxiliary tasks, extracting useful information from the input and injecting it back into the prediction pipeline of the main task @cite_18 .
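A small sketch of the stage-wise (coarse-to-fine) idea referenced above, assuming a toy model, random stand-in data, and a three-stage downsampling schedule; none of these choices are taken from @cite_22 .

```python
# Stage-wise training sketch: the same network sees progressively less
# downsampled versions of the images, coarse stages first.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x_full = torch.randn(32, 1, 28, 28)                 # stand-in for training images
y = torch.randint(0, 10, (32,))

for scale in (4, 2, 1):                             # coarse stage -> fine stage
    if scale > 1:                                   # subsample, then resize back
        x = F.interpolate(F.avg_pool2d(x_full, scale), size=x_full.shape[-2:])
    else:
        x = x_full
    for _ in range(5):                              # a few optimization steps per stage
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```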
{ "cite_N": [ "@cite_18", "@cite_22" ], "mid": [ "2891303672", "2280728065" ], "abstract": [ "Multi-Task Learning (MTL) is appealing for deep learning regularization. In this paper, we tackle a specific MTL context denoted as primary MTL, where the ultimate goal is to improve the performance of a given primary task by leveraging several other auxiliary tasks. Our main methodological contribution is to introduce ROCK, a new generic multi-modal fusion block for deep learning tailored to the primary MTL context. ROCK architecture is based on a residual connection, which makes forward prediction explicitly impacted by the intermediate auxiliary representations. The auxiliary predictor's architecture is also specifically designed to our primary MTL context, by incorporating intensive pooling operators for maximizing complementarity of intermediate representations. Extensive experiments on NYUv2 dataset (object detection with scene classification, depth prediction, and surface normal estimation as auxiliary tasks) validate the relevance of the approach and its superiority to flat MTL approaches. Our method outperforms state-of-the-art object detection models on NYUv2 by a large margin, and is also able to handle large-scale heterogeneous inputs (real and synthetic images) and missing annotation modalities.", "Deep neural networks currently stand at the state of the art for many machine learning applications, yet there still remain limitations in the training of such networks because of their very high parameter dimensionality. In this paper we show that network training performance can be improved using a stage-wise learning strategy, in which the learning process is broken down into a number of related sub-tasks that are completed stage-by-stage. The idea is to inject the information to the network gradually so that in the early stages of training the \"coarse-scale\" properties of the data are captured while the \"finer\" characteristics are learned in later stages. Moreover, the solution found in each stage serves as a prior to the next stage, which produces a regularization effect and enhances the generalization of the learned representations. We show that decoupling the classifier layer from the feature extraction layers of the network is necessary, as it alleviates the diffusion of gradient and over-fitting problems. Experimental results in the context of image classification support these claims." ] }
Numerous approaches have been proposed to utilize hierarchical class information as well. One approach connects multi-layer perceptrons (MLPs) and lets each MLP sequentially learn one level of the class hierarchy, with each MLP taking the output of the preceding one as its input. Another inserts a coarse-category component and fine-category components after a shared layer: classes are grouped into K coarse categories, and each of the K fine-category components is targeted at one coarse category. In @cite_9 , a CNN learns labels generated by maximum margin clustering at the root node, and images in the same cluster are classified at the leaf nodes.
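As a rough illustration of the shared-layer-plus-coarse/fine-components design mentioned above (the dimensions, the number of superclasses, and the argmax routing are hypothetical and not taken from the cited works):

```python
# Shared trunk with one coarse head over K superclasses and one fine head per
# superclass; the predicted superclass routes each example to the matching fine head.
import torch
import torch.nn as nn

K, FINE_PER_COARSE = 4, 5                      # e.g. 4 superclasses x 5 classes each
shared = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
coarse_head = nn.Linear(128, K)
fine_heads = nn.ModuleList(nn.Linear(128, FINE_PER_COARSE) for _ in range(K))


def forward(x):
    h = shared(x)
    coarse_logits = coarse_head(h)
    picked = coarse_logits.argmax(dim=1)       # predicted superclass per sample
    fine_logits = torch.stack([fine_heads[int(k)](h[i]) for i, k in enumerate(picked)])
    return coarse_logits, fine_logits


coarse_logits, fine_logits = forward(torch.randn(8, 784))
```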
{ "cite_N": [ "@cite_9" ], "mid": [ "2905563945" ], "abstract": [ "The availability of large-scale annotated data and the uneven separability of different data categories have become two major impediments of deep learning for image classification. In this paper, we present a semi-supervised hierarchical convolutional neural network (SS-HCNN) to address these two challenges. A large-scale unsupervised maximum margin clustering technique is designed, which splits images into a number of hierarchical clusters iteratively to learn cluster-level CNNs at parent nodes and category-level CNNs at leaf nodes. The splitting uses the similarity of CNN features to group visually similar images into the same cluster, which relieves the uneven data separability constraint. With the hierarchical cluster-level CNNs capturing certain high-level image category information, the category-level CNNs can be trained with a small amount of labeled images, and this relieves the data annotation constraint. A novel cluster splitting criterion is also designed, which automatically terminates the image clustering in the tree hierarchy. The proposed SS-HCNN has been evaluated on the CIFAR-100 and ImageNet classification datasets. The experiments show that the SS-HCNN trained using a portion of labeled training images can achieve comparable performance with other fully trained CNNs using all labeled images. Additionally, the SS-HCNN trained using all labeled images clearly outperforms other fully trained CNNs." ] }
B-CNN learns from coarse to fine features by computing a loss between the superclass labels and the outputs of the branches of the architecture @cite_2 , where the overall B-CNN loss is a weighted sum of the losses over all branches. In @cite_4 , an ultrametric tree built from the semantic meaning of the classes is proposed to exploit hierarchical class information. The probability assigned to each node of the ultrametric tree is the sum of the probabilities of the leaves beneath it; equivalently, each leaf's probability contributes to every node on the path from that leaf to the root.
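In symbols (notation chosen here for illustration, not taken from the cited papers), the two constructions above can be summarized as a weighted sum of per-branch losses over the hierarchy levels and a node probability that accumulates the leaves beneath it:

```latex
% Schematic summary; w_k, \ell, and the tree notation are ours.
L_{\text{B-CNN}} \;=\; \sum_{k=1}^{K} w_k \,\ell\!\left(y^{(k)}, \hat{y}^{(k)}\right)
\quad\text{(branch $k$ predicts the level-$k$ labels $\hat{y}^{(k)}$ with weight $w_k$),}
\qquad
p(v) \;=\; \sum_{\text{leaf } c \text{ below } v} p(c)
\quad\text{(each node $v$ of the ultrametric tree accumulates its leaves' probabilities).}
```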
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "2753276755", "2756815061" ], "abstract": [ "Failing to distinguish between a sheepdog and a skyscraper should be worse and penalized more than failing to distinguish between a sheepdog and a poodle; after all, sheepdogs and poodles are both breeds of dogs. However, existing metrics of failure (so-called \"loss\" or \"win\") used in textual or visual classification recognition via neural networks seldom view a sheepdog as more similar to a poodle than to a skyscraper. We define a metric that, inter alia, can penalize failure to distinguish between a sheepdog and a skyscraper more than failure to distinguish between a sheepdog and a poodle. Unlike previously employed possibilities, this metric is based on an ultrametric tree associated with any given tree organization into a semantically meaningful hierarchy of a classifier's classes.", "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100." ] }
Furthermore, auxiliary inputs are used to support logical reasoning in @cite_20 . Auxiliary inputs based on human knowledge are provided so that the network learns logical reasoning: the network first verifies the logical information against the auxiliary inputs and then proceeds to the next stage.
{ "cite_N": [ "@cite_20" ], "mid": [ "2809918697" ], "abstract": [ "This paper describes a neural network design using auxiliary inputs, namely the indicators, that act as the hints to explain the predicted outcome through logical reasoning, mimicking the human behavior of deductive reasoning. Besides the original network input and output, we add an auxiliary input that reflects the specific logic of the data to formulate a reasoning process for cross-validation. We found that one can design either meaningful indicators, or even meaningless ones, when using such auxiliary inputs, upon which one can use as the basis of reasoning to explain the predicted outputs. As a result, one can formulate different reasonings to explain the predicted results by designing different sets of auxiliary inputs without the loss of trustworthiness of the outcome. This is similar to human explanation process where one can explain the same observation from different perspectives with reasons. We demonstrate our network concept by using the MNIST data with different sets of auxiliary inputs, where a series of design guidelines are concluded. Later, we validated our results by using a set of images taken from a robotic grasping platform. We found that our network enhanced the last 1-2% of the prediction accuracy while eliminating questionable predictions with self-conflicting logics. Future application of our network with auxiliary inputs can be applied to robotic detection problems such as autonomous object grasping, where the logical reasoning can be introduced to optimize robotic learning." ] }
1906.00928
2947601766
We consider the problem of learning a causal graph in the presence of measurement error. This setting is for example common in genomics, where gene expression is corrupted through the measurement process. We develop a provably consistent procedure for estimating the causal structure in a linear Gaussian structural equation model from corrupted observations on its nodes, under a variety of measurement error models. We provide an estimator based on the method-of-moments, which can be used in conjunction with constraint-based causal structure discovery algorithms. We prove asymptotic consistency of the procedure and also discuss finite-sample considerations. We demonstrate our method's performance through simulations and on real data, where we recover the underlying gene regulatory network from zero-inflated single-cell RNA-seq data.
In the presence of latent variables, identifiability is further weakened (only the so-called partial ancestral graph, PAG, is identifiable) and various algorithms have been developed for learning a PAG @cite_13 @cite_22 @cite_4 @cite_19 . However, these algorithms cannot estimate causal relations among the latent variables, which is our problem of interest. @cite_28 study identifiability of directed Gaussian graphical models in the presence of a single latent variable. @cite_6 , @cite_21 , @cite_26 and @cite_23 all consider the problem of learning causal edges among latent variables from the observed variables, i.e., models as in Figure a or generalizations thereof, but under assumptions that may not hold for our applications of interest, namely that the measurement error is independent of the latent variables @cite_6 , that the observed variables are a linear function of the latent variables @cite_21 , that the observed variables are binary @cite_26 , or that each latent variable is non-Gaussian with sufficient outgoing edges to guarantee identifiability @cite_23 .
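For orientation, one simple instance of the setting described above (written in our own generic notation; the paper allows more general measurement-error models, including zero-inflated ones) is a linear Gaussian SEM over latent variables observed with additive noise:

```latex
% Latent variables Z follow a linear Gaussian SEM encoded by the edge-weight
% matrix B; only the noisy X is observed, and the goal is to recover the
% structure of B. (Generic illustration, not the paper's exact formulation.)
Z \;=\; B^{\top} Z + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0,\Omega),
\qquad\qquad
X \;=\; Z + \gamma, \qquad \gamma \sim \mathcal{N}(0,\Gamma).
```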
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_28", "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_13" ], "mid": [ "2260672489", "", "2394978111", "2963254467", "2137099275", "2626207843", "2134652049", "1505105018", "2163687466" ], "abstract": [ "We present a semi-supervised learning algorithm for learning discrete factor analysis models with arbitrary structure on the latent variables. Our algorithm assumes that every latent variable has an \"anchor\", an observed variable with only that latent variable as its parent. Given such anchors, we show that it is possible to consistently recover moments of the latent variables and use these moments to learn complete models. We also introduce a new technique for improving the robustness of method-of-moment algorithms by optimizing over the marginal polytope or its relaxations. We evaluate our algorithm using two real-world tasks, tag prediction on questions from the Stack Overflow website and medical diagnosis in an emergency department.", "", "", "We study parameter identifiability of directed Gaussian graphical models with one latent variable. In the scenario we consider, the latent vari- able is a confounder that forms a source node of the graph and is a parent to all other nodes, which correspond to the observed variables. We give a graphical condition that is sufficient for the Jacobian matrix of the parametrization map to be full rank, which entails that the parametrization is generically finite-to- one, a fact that is sometimes also referred to as local identifiability. We also derive a graphical condition that is necessary for such identifiability. Finally, we give a condition under which generic parameter identifiability can be deter- mined from identifiability of a model associated with a subgraph. The power of these criteria is assessed via an exhaustive algebraic computational study on models with 4, 5, and 6 observable variables.", "We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems.", "Measurement error in the observed values of the variables can greatly change the output of various causal discovery methods. This problem has received much attention in multiple fields, but it is not clear to what extent the causal model for the measurement-error-free variables can be identified in the presence of measurement error with unknown variance. In this paper, we study precise sufficient identifiability conditions for the measurement-error-free causal model and show what information of the causal model can be recovered from observed data. In particular, we present two different sets of identifiability conditions, based on the second-order statistics and higher-order statistics of the data, respectively. 
The former was inspired by the relationship between the generating model of the measurement-error-contaminated data and the factor analysis model, and the latter makes use of the identifiability result of the over-complete independent component analysis problem.", "Causal discovery becomes especially challenging when the possibility of latent confounding and or selection bias is not assumed away. For this task, ancestral graph models are particularly useful in that they can represent the presence of latent confounding and selection effect, without explicitly invoking unobserved variables. Based on the machinery of ancestral graphs, there is a provably sound causal discovery algorithm, known as the FCI algorithm, that allows the possibility of latent confounders and selection bias. However, the orientation rules used in the algorithm are not complete. In this paper, we provide additional orientation rules, augmented by which the FCI algorithm is shown to be complete, in the sense that it can, under standard assumptions, discover all aspects of the causal structure that are uniquely determined by facts of probabilistic dependence and independence. The result is useful for developing any causal discovery and reasoning system based on ancestral graph models.", "This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it is satisfied by many natural families of DAGs that include multi-level DAGs, DAGs with effective depth one, as well as certain families of polytrees.", "We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg." ] }
1906.00679
2947597348
The holy grail of networking is to create networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to the progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough against security attacks to be put in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily-crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities for the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize the readers to the danger of adversarial ML by showing how an easily-crafted adversarial ML example can compromise the operations of the cognitive self-driving network. In this paper, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide some guidelines to design secure ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
With the well-known attacks proposed in the literature @cite_8 , the bar for launching new attacks has been lowered, since the same canned attacks can be reused by others. Although Sommer and Paxson @cite_14 were probably right in 2010 to downplay the potential of security attacks on ML, saying that ``exploiting the specifics of a machine learning implementation requires significant effort, time, and expertise on the attacker's side,'' the danger is real now that an attack can be launched on ML-based implementations with minimal effort, time, and expertise.
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "1985987493", "2773446523" ], "abstract": [ "In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational \"real world\" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection.", "Abstract Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed to understand the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently-different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms." ] }
All classification schemes depicted in the taxonomy are directly related to the intent and goal of the adversary. Most existing adversarial ML attacks are white-box attacks, which are later converted to black-box attacks by exploiting the transferability property of adversarial examples @cite_7 . The transferability property means that adversarial perturbations generated for one ML model will often mislead other, unseen ML models. Related research on adversarial pattern recognition has been carried out for more than a decade, and even before that there was a smattering of works focused on performing ML in the presence of malicious errors @cite_8 .
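To illustrate how low the bar is, the following sketch crafts a one-step fast gradient sign method (FGSM) perturbation against a toy classifier on stand-in flow features. It is a generic example of the kind of easily-crafted attack discussed here, not the specific attack evaluated in the paper; under the transferability property, a perturbation crafted against one surrogate model in this way frequently also misleads other, unseen models.

```python
# One-step FGSM perturbation against a toy "traffic classifier"; the model and
# the random feature vector are placeholders for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))  # benign vs. malicious
x = torch.randn(1, 20, requires_grad=True)   # stand-in for a flow-feature vector
y = torch.tensor([1])                        # assumed true label: malicious

loss = F.cross_entropy(model(x), y)
loss.backward()                              # gradient of the loss w.r.t. the input
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```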
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "1673923490", "2773446523" ], "abstract": [ "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Abstract Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed to understand the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently-different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms." ] }
1906.00860
2947878631
We prove the linear stability of slowly rotating Kerr black holes as solutions of the Einstein vacuum equation: linearized perturbations of a Kerr metric decay at an inverse polynomial rate to a linearized Kerr metric plus a pure gauge term. We work in a natural wave map DeTurck gauge and show that the pure gauge term can be taken to lie in a fixed 7-dimensional space with a simple geometric interpretation. Our proof rests on a robust general framework, based on recent advances in microlocal analysis and non-elliptic Fredholm theory, for the analysis of resolvents of operators on asymptotically flat spaces. With the mode stability of the Schwarzschild metric as well as of certain scalar and 1-form wave operators on the Schwarzschild spacetime as an input, we establish the linear stability of slowly rotating Kerr black holes using perturbative arguments; in particular, our proof does not make any use of special algebraic properties of the Kerr metric. The heart of the paper is a detailed description of the resolvent of the linearization of a suitable hyperbolic gauge-fixed Einstein operator at low energies. As in previous work by the second and third authors on the nonlinear stability of cosmological black holes, constraint damping plays an important role. Here, it eliminates certain pathological generalized zero energy states; it also ensures that solutions of our hyperbolic formulation of the linearized Einstein equation have the stated asymptotics and decay for general initial data and forcing terms, which is a useful feature in nonlinear and numerical applications.
In the algebraically more complicated but analytically less degenerate context of cosmological black holes, we recall that Sá Barreto--Zworski @cite_113 studied the distribution of resonances of Schwarzschild--de Sitter (SdS) black holes; exponential decay of linear scalar waves to constants was proved by Bony--Häfner @cite_36 and Melrose--Sá Barreto--Vasy @cite_112 on SdS and by Dyatlov @cite_70 @cite_31 on Kerr--de Sitter (KdS) spacetimes, and substantially refined by Dyatlov @cite_73 to a full resonance expansion. (See @cite_56 for a physical space approach giving superpolynomial energy decay.) Tensor-valued and nonlinear equations on KdS spacetimes were studied in a series of works by Hintz--Vasy @cite_63 @cite_24 @cite_102 @cite_116 @cite_28 . For a physical space approach to resonances, see Warnick @cite_110 , and for the Maxwell equation on SdS spacetimes, see Keller @cite_66 .
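Schematically, and in generic notation rather than that of any one cited work, a resonance expansion of the kind mentioned above writes a linear wave on such a background as a finite sum over quasinormal modes (resonances) plus an exponentially faster-decaying remainder:

```latex
% Schematic form only; multiplicities, function spaces, and constants differ
% across the cited results.
u(t,x) \;\sim\; \sum_{j} e^{-i\sigma_j t}\, u_j(x) \;+\; \mathcal{O}\!\left(e^{-\nu t}\right),
\qquad \operatorname{Im}\sigma_j > -\nu,
% with the resonance sigma_0 = 0 accounting for the decay of scalar waves to constants.
```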
{ "cite_N": [ "@cite_36", "@cite_70", "@cite_28", "@cite_112", "@cite_102", "@cite_113", "@cite_56", "@cite_24", "@cite_116", "@cite_63", "@cite_110", "@cite_31", "@cite_73", "@cite_66" ], "mid": [ "1965595223", "2018414839", "", "1752701691", "2964289027", "", "1868264628", "", "2964199620", "1659437492", "", "2963880204", "2083456890", "2722540036" ], "abstract": [ "We describe an expansion of the solution of the wave equation on the De Sitter–Schwarzschild metric in terms of resonances. The principal term in the expansion is due to a resonance at 0. The error term decays polynomially if we permit a logarithmic derivative loss in the angular directions and exponentially if we permit an ( ) derivative loss in the angular directions.", "We provide a rigorous definition of quasi-normal modes for a rotating black hole. They are given by the poles of a certain meromorphic family of operators and agree with the heuristic definition in the physics literature. If the black hole rotates slowly enough, we show that these poles form a discrete subset of ( C ) . As an application we prove that the local energy of linear waves in that background decays exponentially once orthogonality to the zero resonance is imposed.", "", "Solutions to the wave equation on de Sitter-Schwarzschild space with smooth initial data on a Cauchy surface are shown to decay exponentially to a constant at temporal infinity, with corresponding uniform decay on the appropriately compactified space.", "We study asymptotics for solutions of Maxwell’s equations, in fact of the Hodge-de Rham equation (d+ δ)u = 0 without restriction on the form degree, on a geometric class of stationary spacetimes with a warped product type structure (without any symmetry assumptions), which in particular include Schwarzschild-de Sitter spaces of all spacetime dimensions n ≥ 4. We prove that solutions decay exponentially to 0 or to stationary states in every form degree, and give an interpretation of the stationary states in terms of cohomological information of the spacetime. We also study the wave equation on differential forms and in particular prove analogous results on Schwarzschildde Sitter spacetimes. We demonstrate the stability of our analysis and deduce asymptotics and decay for solutions of Maxwell’s equations, the Hodge-de Rham equation and the wave equation on differential forms on Kerr-de Sitter spacetimes with small angular momentum.", "", "We consider solutions to the linear wave equation @math on a non-extremal maximally extended Schwarzschild-de Sitter spacetime arising from arbitrary smooth initial data prescribed on an arbitrary Cauchy hypersurface. (In particular, no symmetry is assumed on initial data, and the support of the solutions may contain the sphere of bifurcation of the black white hole horizons and the cosmological horizons.) We prove that in the region bounded by a set of black white hole horizons and cosmological horizons, solutions @math converge pointwise to a constant faster than any given polynomial rate, where the decay is measured with respect to natural future-directed advanced and retarded time coordinates. We also give such uniform decay bounds for the energy associated to the Killing field as well as for the energy measured by local observers crossing the event horizon. The results in particular include decay rates along the horizons themselves. 
Finally, we discuss the relation of these results to previous heuristic analysis of Price and", "", "", "We consider quasilinear wave equations on manifolds for which infinity has a structure generalizing that of Kerr-de Sitter space; in particular the trapped geodesics form a normally hyperbolic invariant manifold. We prove the global existence and decay, to constants for the actual wave equation, of solutions. The key new ingredient compared to earlier work by the authors in the semilinear case [33] and by the first author in the non-trapping quasilinear case [30] is the use of the Nash-Moser iteration in our framework.", "", "", "We establish a Bohr–Sommerfeld type condition for quasi-normal modes of a slowly rotating Kerr–de Sitter black hole, providing their full asymptotic description in any strip of fixed width. In particular, we observe a Zeeman-like splitting of the high multiplicity modes at a = 0 (Schwarzschild–de Sitter), once spherical symmetry is broken. The numerical results presented in Appendix B show that the asymptotics are in fact accurate at very low energies and agree with the numerical results established by other methods in the physics literature. We also prove that solutions of the wave equation can be asymptotically expanded in terms of quasi-normal modes; this confirms the validity of the interpretation of their real parts as frequencies of oscillations, and imaginary parts as decay rates of gravitational waves.", "In this work, we consider solutions of the Maxwell equations on the Schwarzschild-de Sitter family of black hole spacetimes. We prove that, in the static region bounded by black hole and cosmological horizons, solutions of the Maxwell equations decay to stationary Coulomb solutions at a super-polynomial rate, with decay measured according to ingoing and outgoing null coordinates. Our method employs a differential transformation of Maxwell tensor components to obtain higher-order quantities satisfying a Fackerell-Ipser equation, in the style of Chandrasekhar and the more recent work of Pasqualotto. The analysis of the Fackerell-Ipser equation is accomplished by means of the vector field method, with decay estimates for the higher-order quantities leading to decay estimates for components of the Maxwell tensor." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows (1957), is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci (1986). Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows model is not well understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows models. We approach the learning problem by analyzing a more general model which interpolates between the single-parameter Mallows model and the @math -parameter Mallows model. We call our model the Mallows Block Model, referring to the block models that are popular in theoretical statistics. Our sample complexity analysis gives tight bounds for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, one single sample from the Mallows Block Model is sufficient to estimate the spread parameters with error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
There has been a significant volume of research work on algorithmic and learning problems related to our work. In the consensus ranking problem, a finite set @math of rankings is given, and we want to compute the ranking @math . This problem is known to be NP-hard, but it admits a polynomial-time @math -approximation algorithm and a PTAS. When the rankings are i.i.d. samples from a Mallows distribution, consensus ranking is equivalent to computing the maximum likelihood ranking, which does not depend on the spread parameter. Intuitively, the problem of finding the central ranking should not be hard if the probability mass is concentrated around the central ranking. @cite_8 came up with a branch-and-bound technique which relies on this observation. @cite_9 proposed a dynamic programming approach that computes the consensus ranking efficiently under the Mallows model. @cite_10 showed that the central ranking can be recovered from a logarithmic number of i.i.d. samples from a Mallows distribution (see also Theorem ).
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_8" ], "mid": [ "1584796555", "2012215691", "2113815377" ], "abstract": [ "This paper studies problems of inferring order given noisy information. In these problems there is an unknown order (permutation) @math on @math elements denoted by @math . We assume that information is generated in a way correlated with @math . The goal is to find a maximum likelihood @math given the information observed. We will consider two different types of observations: noisy comparisons and noisy orders. The data in Noisy orders are permutations given from an exponential distribution correlated with (this is also called the Mallow's model). The data in Noisy Comparisons is a signal given for each pair of elements which is correlated with their true ordering. In this paper we present polynomial time algorithms for solving both problems with high probability. As part of our proof we show that for both models the maximum likelihood solution @math is close to the original permutation @math . Our results are of interest in applications to ranking, such as ranking in sports, or ranking of search items based on comparisons by experts.", "A well-studied approach to the design of voting rules views them as maximum likelihood estimators; given votes that are seen as noisy estimates of a true ranking of the alternatives, the rule must reconstruct the most likely true ranking. We argue that this is too stringent a requirement, and instead ask: How many votes does a voting rule need to reconstruct the true ranking? We define the family of pairwise-majority consistent rules, and show that for all rules in this family the number of samples required from the Mallows noise model is logarithmic in the number of alternatives, and that no rule can do asymptotically better (while some rules like plurality do much worse). Taking a more normative point of view, we consider voting rules that surely return the true ranking as the number of samples tends to infinity (we call this property accuracy in the limit); this allows us to move to a higher level of abstraction. We study families of noise models that are parametrized by distance functions, and find voting rules that are accurate in the limit for all noise models in such general families. We characterize the distance functions that induce noise models for which pairwise-majority consistent rules are accurate in the limit, and provide a similar result for another novel family of position-dominance consistent rules. These characterizations capture three well-known distance functions.", "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows Models. We approach the learning problem by analyzing a more general model which interpolates between the single parameter Mallows Model and the @math parameter Mallows model. We call our model the Mallows Block Model -- referring to the Block Models that are a popular model in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, a single sample from the Mallows Block Model is sufficient to estimate the spread parameters with error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
@cite_5 considered learning the spread parameter of a Mallows model based on a single sample, assuming that the central ranking is known. He studied the asymptotic behavior of his estimator and proved consistency. We strengthen this result by showing that our parameter estimator, based on a single sample, can achieve optimal error for the Mallows Block model (Corollary ).
{ "cite_N": [ "@cite_5" ], "mid": [ "1511376624" ], "abstract": [ "Asymptotics of the normalizing constant is computed for a class of one parameter exponential families on permutations which includes Mallows model with Spearmans's Footrule and Spearman's Rank Correlation Statistic. The MLE, and a computable approximation of the MLE are shown to be consistent. The pseudo-likelihood estimator of Besag is shown to be @math -consistent. An iterative algorithm (IPFP) is proved to converge to the limiting normalizing constant. The Mallows model with Kendall's Tau is also analyzed to demonstrate flexibility of the tools of this paper." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows Models. We approach the learning problem by analyzing a more general model which interpolates between the single parameter Mallows Model and the @math parameter Mallows model. We call our model the Mallows Block Model -- referring to the Block Models that are a popular model in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, a single sample from the Mallows Block Model is sufficient to estimate the spread parameters with error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
The parameter estimation of the Generalized Mallows Model has been examined from a practical point of view by @cite_7 , but no theoretical guarantees for the sample complexity have been provided. Several ranking models are routinely used in analyzing ranking data, such as the Plackett-Luce model, the Babington-Smith model, spectral analysis based methods and non-parametric methods. However, to the best of our knowledge, none of these ranking methods have been analyzed from the point of view of distribution learning, which comes with a guarantee on some information-theoretic distance. Prior work considered the problem of learning the parameters of the Plackett-Luce model and came up with high-probability bounds for the estimator that are tight, in the sense that no algorithm can achieve lower estimation error with fewer examples.
{ "cite_N": [ "@cite_7" ], "mid": [ "43928053" ], "abstract": [ "SUMMARY A probability distribution is defined over the r! permutations of r objects in such a way as to incorporate up to r! -1 parameters. Problems of estimation and testing are considered. The results are applied to data on voting at elections and beanstores." ] }
1906.00777
2947226932
The drone base station (DBS) is a promising technique to extend wireless connections to uncovered users of terrestrial radio access networks (RAN). To improve user fairness and network performance, in this paper we design 3D trajectories of multiple DBSs in drone assisted radio access networks (DA-RAN), where DBSs fly over associated areas of interest (AoIs) and relay communications between the base station (BS) and users in the AoIs. We formulate the multi-DBS 3D trajectory planning and scheduling as a mixed integer non-linear programming (MINLP) problem with the objective of minimizing the average DBS-to-user (D2U) pathloss. The 3D trajectory variations in both horizontal and vertical directions, as well as state-of-the-art DBS-related channel models, are considered in the formulation. To address the non-convexity and NP-hardness of the MINLP problem, we first decouple it into multiple integer linear programming (ILP) and quasi-convex sub-problems in which AoI association, D2U communication scheduling, horizontal trajectories and flying heights of DBSs are respectively optimized. Then, we design a multi-DBS 3D trajectory planning and scheduling algorithm to solve the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation and a search-based start slot scheduling are considered in the proposed algorithm to improve trajectory design performance and ensure the inter-DBS distance constraint, respectively. Extensive simulations are conducted to investigate the impacts of DBS quantity, horizontal speed and initial trajectory on the trajectory planning results. Compared with static DBS deployment, the proposed trajectory planning can achieve a 10-15 dB reduction in average D2U pathloss and reduce the D2U pathloss standard deviation by 68 , which indicate improvements in network performance and user fairness.
Driven by advancements in flight control and communication technologies, both industry and academia are devoting considerable effort to exploiting the full potential of DA-RAN @cite_11 . As the foundation for drone communication and DA-RAN research, Al-Hourani et al. built the D2U pathloss model for DBSs based on extensive field-test data from various scenarios @cite_23 . A closed-form expression of the D2U pathloss model suited to different scenarios is proposed, in which the probabilities of both LoS and NLoS D2U links are considered. As an extension of this work, they further formulated the pathloss model for D2B communication in the suburban scenario @cite_8 , where the D2B links are dominated by LoS links. Leveraging the pathloss models in @cite_23 and @cite_8 , various studies have emerged on both static DBS deployment and DBS trajectory planning.
{ "cite_N": [ "@cite_8", "@cite_23", "@cite_11" ], "mid": [ "2758233612", "2031834036", "2962691117" ], "abstract": [ "Operating unmanned aerial vehicle (UAV) over cellular networks would open the barriers of remote navigation and far-flung flying by combining the benefits of UAVs and the ubiquitous availability of cellular networks. In this letter, we provide an initial insight on the radio propagation characteristics of cellular-to-UAV (CtU) channel. In particular, we model the statistical behavior of the path-loss from a cellular base station toward a flying UAV. Where we report the value of the path-loss as a function of the depression angle and the terrestrial coverage beneath the UAV. The provided model is derived based on extensive experimental data measurements conducted in a typical suburban environment for both terrestrial (by drive test) and aerial coverage (using a UAV). The model provides simple and accurate prediction of CtU path-loss that can be useful for both researchers and network operators alike.", "Low-altitude aerial platforms (LAPs) have recently gained significant popularity as key enablers for rapid deployable relief networks where coverage is provided by onboard radio heads. These platforms are capable of delivering essential wireless communication for public safety agencies in remote areas or during the aftermath of natural disasters. In this letter, we present an analytical approach to optimizing the altitude of such platforms to provide maximum radio coverage on the ground. Our analysis shows that the optimal altitude is a function of the maximum allowed pathloss and of the statistical parameters of the urban environment, as defined by the International Telecommunication Union. Furthermore, we present a closed-form formula for predicting the probability of the geometrical line of sight between a LAP and a ground receiver.", "The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. On the other hand, UAVs can operate as flying mobile terminals within a cellular network. Such cellular-connected UAVs can enable several applications ranging from real-time video streaming to item delivery. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as 3D deployment, performance analysis, channel modeling, and energy efficiency are explored along with representative results. Then, open problems and potential research directions pertaining to UAV communications are introduced. Finally, various analytical frameworks and mathematical tools, such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems." ] }
1906.00777
2947226932
The drone base station (DBS) is a promising technique to extend wireless connections to uncovered users of terrestrial radio access networks (RAN). To improve user fairness and network performance, in this paper we design 3D trajectories of multiple DBSs in drone assisted radio access networks (DA-RAN), where DBSs fly over associated areas of interest (AoIs) and relay communications between the base station (BS) and users in the AoIs. We formulate the multi-DBS 3D trajectory planning and scheduling as a mixed integer non-linear programming (MINLP) problem with the objective of minimizing the average DBS-to-user (D2U) pathloss. The 3D trajectory variations in both horizontal and vertical directions, as well as state-of-the-art DBS-related channel models, are considered in the formulation. To address the non-convexity and NP-hardness of the MINLP problem, we first decouple it into multiple integer linear programming (ILP) and quasi-convex sub-problems in which AoI association, D2U communication scheduling, horizontal trajectories and flying heights of DBSs are respectively optimized. Then, we design a multi-DBS 3D trajectory planning and scheduling algorithm to solve the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation and a search-based start slot scheduling are considered in the proposed algorithm to improve trajectory design performance and ensure the inter-DBS distance constraint, respectively. Extensive simulations are conducted to investigate the impacts of DBS quantity, horizontal speed and initial trajectory on the trajectory planning results. Compared with static DBS deployment, the proposed trajectory planning can achieve a 10-15 dB reduction in average D2U pathloss and reduce the D2U pathloss standard deviation by 68 , which indicate improvements in network performance and user fairness.
In most static DBS deployment works, terrestrial user QoS or network performance is improved by optimizing the hovering positions of one or more DBSs. For instance, through a clustering-based approach, Mozaffari et al. designed the optimal locations of DBSs that maximize the information collection gain from terrestrial IoT devices @cite_13 . In @cite_28 , Zhang et al. optimized the DBS density in a DBS network to maximize the network throughput while satisfying the efficiency requirements of the cellular network. Zhou et al. studied the downlink coverage features of DBSs using Nakagami-m fading models, and calculated the optimal height and density of multiple DBSs to achieve maximal coverage probability @cite_16 . Although various works have investigated static DBS deployment in different scenarios with different methods, the D2B link quality constraint is simplified or ignored in most of them. In the works that do consider D2B links, the D2B channel models are either the same as the D2U pathloss model @cite_24 or traditional terrestrial channel models @cite_10 . In this paper, we further implement the specific D2B channel model derived in @cite_8 to highlight the D2B channel features.
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_28", "@cite_24", "@cite_16", "@cite_10" ], "mid": [ "2604830243", "2758233612", "2558339943", "2962968784", "2802614201", "2963533607" ], "abstract": [ "In this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, are investigated. In particular, to enable reliable uplink communications for the IoT devices with a minimum total transmit power, a novel framework is proposed for jointly optimizing the 3D placement and the mobility of the UAVs, device-UAV association, and uplink power control. First, given the locations of active IoT devices at each time instant, the optimal UAVs’ locations and associations are determined. Next, to dynamically serve the IoT devices in a time-varying network, the optimal mobility patterns of the UAVs are analyzed. To this end, based on the activation process of the IoT devices, the time instances at which the UAVs must update their locations are derived. Moreover, the optimal 3D trajectory of each UAV is obtained in a way that the total energy used for the mobility of the UAVs is minimized while serving the IoT devices. Simulation results show that, using the proposed approach, the total-transmit power of the IoT devices is reduced by 45 compared with a case, in which stationary aerial base stations are deployed. In addition, the proposed approach can yield a maximum of 28 enhanced system reliability compared with the stationary case. The results also reveal an inherent tradeoff between the number of update times, the mobility of the UAVs, and the transmit power of the IoT devices. In essence, a higher number of updates can lead to lower transmit powers for the IoT devices at the cost of an increased mobility for the UAVs.", "Operating unmanned aerial vehicle (UAV) over cellular networks would open the barriers of remote navigation and far-flung flying by combining the benefits of UAVs and the ubiquitous availability of cellular networks. In this letter, we provide an initial insight on the radio propagation characteristics of cellular-to-UAV (CtU) channel. In particular, we model the statistical behavior of the path-loss from a cellular base station toward a flying UAV. Where we report the value of the path-loss as a function of the depression angle and the terrestrial coverage beneath the UAV. The provided model is derived based on extensive experimental data measurements conducted in a typical suburban environment for both terrestrial (by drive test) and aerial coverage (using a UAV). The model provides simple and accurate prediction of CtU path-loss that can be useful for both researchers and network operators alike.", "In this paper, we study spectrum sharing of drone small cells (DSCs) network modeled by the 3-D Poisson point process. This paper also investigates an underlay spectrum sharing between the 3-D DSCs network and traditional cellular networks modeled by 2-D Poisson point processes. We take advantage of the tractability of the Poisson point process to derive the explicit expressions for the DSCs coverage probability and achievable throughput. To maximize the DSCs network throughput while satisfying the cellular network efficiency constraint, we find the optimal density of DSCs aerial base stations. Furthermore, we explore the scaling behavior of the optimal DSCs density with respect to the DSCs outage probability constraint under different heights of DSCs. 
Our analytical and numerical results show that the maximum throughput of the DSCs user increases almost linearly with the increase of the DSCs outage constraint. In order to protect the cellular user, the throughput of the DSCs user stops increasing when it meets the cellular network efficiency loss constraint. To further protect the cellular network in the spectrum underlay, we investigate the effect of primary exclusive regions (PERs) in a 3-D space. Unlike the circular PER in traditional cellular spectrum sharing in the 2-D space, the shape of the 3-D PER is found as a half sphere or a half sphere segment, depending on the radius of PER and the DSCs height limit. We show that the radius of PER should be restricted for small DSCs constraints and limited DSCs height.", "The densification of small cell base stations in a 5G architecture is a promising approach to enhance the coverage area and facilitate the ever increasing capacity demand of end users. However, the bottleneck is an intelligent management of a backhaul fronthaul network for these small cell base stations. This involves efficient association and placement of the backhaul hubs that connects these small-cells with the core network. Terrestrial hubs suffer from an inefficient non line of sight link limitations and unavailability of a proper infrastructure in an urban area. Realizing the popularity of flying platforms, we employ here an idea of using networked flying platform (NFP) such as unmanned aerial vehicles (UAVs), drones, unmanned balloons flying at different altitudes, as aerial backhaul hubs. The association problem of these NFP-hubs and small- cell base stations is formulated considering backhaul link and NFP related limitations such as maximum number of supported links and bandwidth. We then present an efficient and distributed solution of the designed problem, which performs a greedy search in order to maximize the sum rate of the overall network. A favorable performance is observed via a numerical comparison of our proposed method with optimal exhaustive search algorithm in terms of sum rate and run-time speed.", "In this paper, we study coverage probabilities of the UAV-assisted cellular network modeled by 2-dimension (2D) Poisson point process. The cellular user is assumed to connect to the nearest aerial base station. We derive the explicit expressions for the downlink coverage probability for the Rayleigh fading channel. Furthermore, we explore the behavior of performance when taking the property of air-to-ground channel into consideration. Our analytical and numerical results show that the coverage probability is affected by UAV height, pathloss exponent and UAV density. To maximize the coverage probability, the optimal height and density of UAVs are studied, which could be beneficial for the UAV deployment design.", "We introduce the concept of using unmanned aerial vehicles (UAVs) as drone base stations for in-band Integrated Access and Backhaul (IB-IAB) scenarios for 5G networks. We first present a system model for forward link transmissions in an IB-IAB multi- tier drone cellular network. We then investigate the key challenges of this scenario and propose a framework that utilizes the flying capabilities of the UAVs as the main degree of freedom to find the optimal precoder design for the backhaul links, user-base station association, UAV 3D hovering locations, and power allocations. We discuss how the proposed algorithm can be utilized to optimize the network performance in both large and small scales. 
Finally, we use an exhaustive search-based solution to demonstrate the performance gains that can be achieved from the presented algorithm in terms of the received signal to interference plus noise ratio (SINR) and overall network sum-rate." ] }
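The clustering-based static deployment idea surveyed above can be illustrated with a plain Lloyd/k-means iteration that places a few DBSs at the centroids of user clusters, a proxy for minimizing average horizontal distance (and hence pathloss). This is a generic sketch of the idea only, not the association and scheduling optimization of this paper nor the exact algorithms of the cited works; all names are ours.

```python
import random

def kmeans_placement(users, num_dbs, iters=50, seed=0):
    """Place num_dbs drone base stations at the centroids of user clusters.
    users: list of (x, y) positions.  Returns the DBS positions."""
    rng = random.Random(seed)
    centers = rng.sample(users, num_dbs)
    for _ in range(iters):
        clusters = [[] for _ in range(num_dbs)]
        for ux, uy in users:
            # assign each user to the nearest DBS (squared distance)
            k = min(range(num_dbs),
                    key=lambda i: (ux - centers[i][0]) ** 2 + (uy - centers[i][1]) ** 2)
            clusters[k].append((ux, uy))
        for i, pts in enumerate(clusters):
            if pts:  # keep an empty cluster's centre where it is
                centers[i] = (sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts))
    return centers

if __name__ == "__main__":
    rng = random.Random(1)
    users = [(rng.uniform(0, 1000), rng.uniform(0, 1000)) for _ in range(200)]
    print(kmeans_placement(users, num_dbs=3))
```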
1906.01012
2948721154
Action recognition has so far mainly focused on the problem of classifying hand-selected, pre-clipped actions, reaching impressive results in this field. But with performance already reaching a ceiling on current datasets, it also appears that the next steps in the field will have to go beyond this fully supervised classification. One way to overcome those problems is to move towards less restricted scenarios. In this context we present a large-scale real-world dataset designed to evaluate learning techniques for human action recognition beyond hand-crafted datasets. To this end we put the process of collecting data on its feet again and start with the annotation of a test set of 250 cooking videos. The training data is then gathered by searching for the respective annotated classes within the subtitles of freely available videos. The uniqueness of the dataset is attributed to the fact that the whole process of collecting the data and training does not involve any human intervention. To address the problem of semantic inconsistencies that arise with this kind of training data, we further propose a semantic hierarchical structure for the mined classes.
Action recognition has long been a challenging topic, and many innovative approaches, mainly for the task of action classification @cite_30 @cite_26 @cite_32 , have emerged in the research community. But, obviously, we are still far away from the real-world task of learning arbitrary action classes from video data. One limitation here might be the lack of real-world datasets that are based on truly random collections of videos.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_32" ], "mid": [ "2142194269", "2105101328", "2156303437" ], "abstract": [ "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification." ] }
1906.01012
2948721154
Action recognition has so far mainly focused on the problem of classifying hand-selected, pre-clipped actions, reaching impressive results in this field. But with performance already reaching a ceiling on current datasets, it also appears that the next steps in the field will have to go beyond this fully supervised classification. One way to overcome those problems is to move towards less restricted scenarios. In this context we present a large-scale real-world dataset designed to evaluate learning techniques for human action recognition beyond hand-crafted datasets. To this end we put the process of collecting data on its feet again and start with the annotation of a test set of 250 cooking videos. The training data is then gathered by searching for the respective annotated classes within the subtitles of freely available videos. The uniqueness of the dataset is attributed to the fact that the whole process of collecting the data and training does not involve any human intervention. To address the problem of semantic inconsistencies that arise with this kind of training data, we further propose a semantic hierarchical structure for the mined classes.
Apart from first-generation datasets @cite_2 @cite_23 , where actors were required to perform certain actions in a controlled environment, current datasets such as HMDB @cite_22 , UCF @cite_27 or the recently released Kinetics dataset @cite_24 are mainly acquired from web sources such as YouTube clips or movies, with the aim of representing realistic scenarios for training and testing. Here, videos were usually first retrieved by predefined action queries and later clipped and organized to capture the atomic actions or their repetitions. Other datasets such as Thumos @cite_15 , MPI Cooking @cite_6 , Breakfast @cite_16 or the recently released Epic Kitchen dataset @cite_13 focus on the labeling of one or more action segments in single long videos, trying to temporally detect or segment predefined action classes within the video.
{ "cite_N": [ "@cite_22", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2126579184", "2019660985", "2963524571", "24089286", "2034328688", "2010399676", "", "2099614498", "" ], "abstract": [ "With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.", "While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. 
We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2 on HMDB-51 and 97.9 on UCF-101.", "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.", "Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper, we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other relative approaches for action recognition.", "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low quality video", "", "This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). 
Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities.", "" ] }
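The subtitle-based data collection described in the abstract above (harvesting weakly labeled training clips by searching for annotated class names in the subtitles of freely available videos) can be sketched as follows. The class vocabulary, the SRT subtitle format and the padding window are all hypothetical choices made for illustration; this is not the paper's actual pipeline.

```python
import re

# Hypothetical class vocabulary; in practice this would come from the
# annotated test set.
CLASSES = ["crack egg", "pour milk", "cut onion", "stir dough"]

SRT_BLOCK = re.compile(
    r"(\d{2}:\d{2}:\d{2}),\d+\s+-->\s+(\d{2}:\d{2}:\d{2}),\d+\s+(.*?)(?:\n\n|\Z)",
    re.S)

def to_seconds(ts):
    h, m, s = (int(x) for x in ts.split(":"))
    return 3600 * h + 60 * m + s

def mine_subtitles(srt_text, margin=5):
    """Return (class, start_sec, end_sec) clip proposals for every subtitle
    block whose text contains all words of a class name, padded by `margin`
    seconds on each side."""
    proposals = []
    for start, end, text in SRT_BLOCK.findall(srt_text):
        words = set(text.lower().split())
        for cls in CLASSES:
            if all(w in words for w in cls.split()):
                proposals.append((cls, max(0, to_seconds(start) - margin),
                                  to_seconds(end) + margin))
    return proposals

if __name__ == "__main__":
    demo = ("1\n00:00:05,000 --> 00:00:08,000\nnow crack the egg into the bowl\n\n"
            "2\n00:00:20,000 --> 00:00:23,000\nslowly pour the milk in\n\n")
    print(mine_subtitles(demo))  # -> [('crack egg', 0, 13), ('pour milk', 15, 28)]
```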
cs0003054
2949339967
The idle computers on a local area, campus area, or even wide area network represent a significant computational resource---one that is, however, also unreliable, heterogeneous, and opportunistic. This type of resource has been used effectively for embarrassingly parallel problems but not for more tightly coupled problems. We describe an algorithm that allows branch-and-bound problems to be solved in such environments. In designing this algorithm, we faced two challenges: (1) scalability, to effectively exploit the variably sized pools of resources available, and (2) fault tolerance, to ensure the reliability of services. We achieve scalability through a fully decentralized algorithm, by using a membership protocol for managing dynamically available resources. However, this fully decentralized design makes achieving reliability even more challenging. We guarantee fault tolerance in the sense that the loss of up to all but one resource will not affect the quality of the solution. For propagating information efficiently, we use epidemic communication for both the membership protocol and the fault-tolerance mechanism. We have developed a simulation framework that allows us to evaluate design alternatives. Results obtained in this framework suggest that our techniques can execute scalably and reliably.
The only fully decentralized, fault-tolerant B&B algorithm for distributed-memory architectures is DIB (Distributed Implementation of Backtracking) @cite_0 . DIB was designed for a wide range of tree-based applications, such as recursive backtrack, branch-and-bound, and alpha-beta pruning. It is a distributed, asynchronous algorithm that uses a dynamic load-balancing technique. Its failure recovery mechanism is based on keeping track of which machine is responsible for each unsolved problem. Each machine memorizes the problems for which it is responsible, as well as the machines to which it sent problems or from which it received problems. The completion of a problem is reported to the machine the problem came from. Hence, each machine can determine whether the work for which it is responsible is still unsolved, and can redo that work in the case of failure.
{ "cite_N": [ "@cite_0" ], "mid": [ "1984263429" ], "abstract": [ "DIB is a general-purpose package that allows a wide range of applications such as recursive backtrack, branch and bound, and alpha-beta search to be implemented on a multicomputer. It is very easy to use. The application program needs to specify only the root of the recursion tree, the computation to be performed at each node, and how to generate children at each node. In addition, the application program may optionally specify how to synthesize values of tree nodes from their children's values and how to disseminate information (such as bounds) either globally or locally in the tree. DIB uses a distributed algorithm, transparent to the application programmer, that divides the problem into subproblems and dynamically allocates them to any number of (potentially nonhomogeneous) machines. This algorithm requires only minimal support from the distributed operating system. DIB can recover from failures of machines even if they are not detected. DIB currently runs on the Crystal multicomputer at the University of Wisconsin-Madison. Many applications have been implemented quite easily, including exhaustive traversal ( N queens, knight's tour, negamax tree evaluation), branch and bound (traveling salesman) and alpha-beta search (the game of NIM). Speedup is excellent for exhaustive traversal and quite good for branch and bound." ] }
cs0003008
1524644832
This paper presents a method of computing a revision of a function-free normal logic program. If an added rule is inconsistent with a program, that is, if it leads to a situation such that no stable model exists for the new program, then deletion and addition of rules are performed to avoid inconsistency. We specify a revision by translating a normal logic program into an abductive logic program with abducibles that represent deletion and addition of rules. To compute such deletions and additions, we propose an adaptation of our top-down abductive proof procedure that computes the abducibles relevant to an added rule. We compute a minimally revised program by choosing a minimal set of abducibles among all the sets of abducibles computed by the top-down proof procedure.
There are many procedures to compute stable models, generalized stable models or abduction. If we used a bottom-up procedure on our translated abductive logic program to compute all the generalized stable models naively, the sets of abducibles to be compared would be larger, since abducibles of irrelevant temporary rules, and of addable rules that lead to inconsistency, would also be considered. Therefore, it is better to compute only the abducibles related to the inconsistency. To our knowledge, the only top-down procedure that can be used for this purpose is Satoh and Iwayama's, since we need bottom-up consistency checking of the addition and deletion of literals while computing abducibles for revision. This task is similar to integrity constraint checking in @cite_16 , and Satoh and Iwayama's procedure includes it.
{ "cite_N": [ "@cite_16" ], "mid": [ "190622319" ], "abstract": [ "Abstract We propose an extension of the SLDNF proof procedure for checking integrity constraints in deductive databases. To achieve the effect of the simplification methods investigated by Nicolas [1982], Lloyd, Sonenberg, and Topor [1986], and Decker [1986], we use clauses corresponding to the updates as top clauses for the search space. This builds in the assumption that the database satisfied its integrity constraints prior to the transaction, and that, therefore, any violation of the constraints in the updated database must involve at least one of the updates in the transaction. Different simplification methods can be simulated by different strategies for literal selection and search. The SLDNF proof procedure needs to be extended to use as top clause any arbitrary deductive rule, denial, or negated fact, to incorporate inference rules for reasoning about implicit deletions resulting from other deletions and additions, and to allow an extended resolution step for reasoning forward from negated facts." ] }
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
Dealing with preferences on rules seems to necessitate a two-level approach. This in fact is a characteristic of many approaches found in the literature. The majority of these approaches treat preference at the meta-level by defining alternative semantics. @cite_1 proposes a modification of well-founded semantics in which dynamic preferences may be given for rules employing @math . @cite_12 and @cite_5 propose different prioritized versions of answer set semantics. In @cite_12 static preferences are addressed first, by defining the reduct of a logic program @math , which is a subset of @math that is most preferred. For the following example, their approach gives two answer sets (one with @math and one with @math ) which seems to be counter-intuitive; ours in contrast has a single answer set containing @math . Moreover, the dynamic case is addressed by specifying a transformation of a dynamic program to a set of static programs.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_12" ], "mid": [ "2124627636", "1847820984", "1577410338" ], "abstract": [ "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.", "The paper describes an extension of well-founded semantics for logic programs with two types of negation. In this extension information about preferences between rules can be expressed in the logical language and derived dynamically. This is achieved by using a reserved predicate symbol and a naming technique. Conflicts among rules are resolved whenever possible on the basis of derived preference information. The well-founded conclusions of prioritized logic programs can be computed in polynomial time. A legal reasoning example illustrates the usefulness of the approach.", "" ] }
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
Brewka and Eiter @cite_5 address static preferences on rules in extended logic programs. They begin with a strict partial order on a set of rules, but define preference with respect to total orders that conform to the original partial order. Preferred answer sets are then selected from among the collection of answer sets of the (unprioritised) program. In contrast, we deal only with the original partial order, which is translated into the object theory. As well, only preferred extensions are produced in our approach; there is no need for meta-level filtering of extensions.
{ "cite_N": [ "@cite_5" ], "mid": [ "2124627636" ], "abstract": [ "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity." ] }
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
A two-level approach is also found in @cite_7 , where a methodology for directly encoding preferences in logic programs is proposed. The "second-order flavour" of this approach stems from the reification of rules and preferences. For example, a rule ( p ← r, s, not q ) is expressed by the formula ( default(n, p, [r, s], [q]) ) where @math is the name of the rule. The Prolog-like list notation @math and @math raises the possibility of an infinite Herbrand universe; this is problematic for systems like smodels and dlv that rely on finite Herbrand universes.
{ "cite_N": [ "@cite_7" ], "mid": [ "1482502297" ], "abstract": [ "The purpose of this paper is to investigate the methodology of reasoning with prioritized defaults in the language of logic programs under the answer set semantics. We present a domain independent system of axioms, written as an extended logic program, which defines reasoning with prioritized defaults. These axioms are used in conjunction with a description of a particular domain encoded in a simple language allowing representation of defaults and their priorities. Such domain descriptions are of course domain dependent and should be specified by the users. We give sufficient conditions for consistency of domain descriptions and illustrate the use of our system by formalizing various examples from the literature. Unlike many other approaches to formalizing reasoning with priorities ours does not require development of the new semantics of the language. Instead, the meaning of statements in the domain description is given by the system of (domain independent) axioms. We believe that in many cases this leads to simpler and more intuitive formalization of reasoning examples. We also present some discussion of differences between various formalizations." ] }
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
Viewed in a broader context, finding a stable model is a combinatorial search problem. Other combinatorial search problems include propositional satisfiability, constraint satisfaction, constraint logic programming and integer linear programming problems, as well as some other logic programming problems such as those expressible in @cite_28 . The difference between these formalisms and the stable model semantics is that they do not include default negation; moreover, all but the last are monotonic.
{ "cite_N": [ "@cite_28" ], "mid": [ "2051790368" ], "abstract": [ "In this paper a logic-based specification language, called NP-SPEC, is presented. The language is obtained by extending DATALOG through allowing a limited use of some second-order predicates of predefined form. NP-SPEC programs specify solutions to problems in a very abstract and concise way, and are executable. In the present prototype they are compiled to PROLOG code, which is run to construct outputs. Second-order predicates of suitable form allow to limit the size of search spaces in order to obtain reasonably efficient construction of problem solutions. NP-SPEC expressive power is precisely characterized as to express exactly the problems in the class NP. The specification of several combinatorial problems in NP-SPEC is shown, and the efficiency of the generated programs is evaluated." ] }
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
From an algorithmic standpoint the progenitor of the @math algorithm is the Davis-Putnam (-Logemann-Loveland) procedure @cite_37 for determining the satisfiability of propositional formulas. This procedure can be seen as a backtracking search procedure that makes assumptions about the truth values of the propositional atoms in a formula and that then derives new truth values from these assumptions in order to prune the search space.
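As an illustration of this scheme, the following minimal sketch (ours, not the smodels or original Davis-Putnam code) alternates unit propagation, which derives truth values forced by the current assumptions, with branching on an unassigned atom; clauses are encoded as lists of signed integers, a convention chosen for this example.

```python
# Minimal DPLL-style satisfiability check: clauses are lists of non-zero
# ints (positive = atom, negative = its negation), e.g. [[1, -2], [2]].
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            lits = []
            satisfied = False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:          # empty clause: conflict, backtrack
                return None
            if len(lits) == 1:    # unit clause: forced assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified

    if not clauses:               # every clause satisfied
        return assignment

    # Branch on an unassigned atom and try both truth values.
    atom = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, atom: value})
        if result is not None:
            return result
    return None

# A satisfying assignment such as {1: True, 3: False, 2: True}.
print(dpll([[1, -2], [2, 3], [-1, -3]]))
```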
{ "cite_N": [ "@cite_37" ], "mid": [ "2057361103" ], "abstract": [ "The programming of a proof procedure is discussed in connection with trial runs and possible improvements." ] }
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
While the extended rules of this work are novel, there are some analogous constructions in the literature. The choice rule can be seen as a generalization of the disjunctive rule of the possible model semantics @cite_43 . The disjunctive rule of disjunctive logic programs @cite_14 also resembles the choice rule, but in this case the semantics is different. The stable models of a disjunctive program are subset minimal while the stable models of a logic program are grounded, i.e., atoms cannot justify their own inclusion. If a program contains choice rules, then a grounded model is not necessarily subset minimal.
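A one-rule example (ours, in smodels-style syntax, not taken from the cited papers) makes the last point concrete:

```latex
% The program consisting of the single choice rule  {p}.  has two stable
% models.  Both are grounded, since the choice rule itself licenses the
% optional inclusion of p, but {p} is not subset minimal because the empty
% set is also a model; a subset-minimal (disjunctive) reading would
% discard it.
\[
  \Pi = \{\, \{p\} \leftarrow \,\}
  \qquad\Longrightarrow\qquad
  \text{stable models of } \Pi \;=\; \{\, \emptyset,\ \{p\} \,\}.
\]
```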
{ "cite_N": [ "@cite_43", "@cite_14" ], "mid": [ "2050486226", "2085084839" ], "abstract": [ "In this paper, we study a new semantics of logic programming and deductive databases. Thepossible model semantics is introduced as a declarative semantics of disjunctive logic programs. The possible model semantics is an alternative theoretical framework to the classical minimal model semantics and provides a flexible inference mechanism for inferring negation in disjunctive logic programs. We also present a proof procedure for the possible model semantics and show that the possible model semantics has an advantage from the computational complexity point of view.", "We introduce the stable model semantics fordisjunctive logic programs and deductive databases, which generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used we obtain thedisjunctive stable semantics or thepartial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties: • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with thestable (respectively,partial stable) semantics. • For normal programs, the partial disjunctive stable semantics also coincides with thewell-founded semantics. • For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with theperfect model semantics. • The partial disjunctive stable semantics can be generalized to the class ofall disjunctive logic programs. • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use ofclassical negation. • After translation of the programP into a suitable autoepistemic theory ( P ) the disjunctive (respectively, partial disjunctive) stable semantics ofP coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of ( P ) ." ] }
math0005204
1540167525
We present some new and recent algorithmic results concerning polynomial system solving over various rings. In particular, we present some of the best recent bounds on: (a) the complexity of calculating the complex dimension of an algebraic set, (b) the height of the zero-dimensional part of an algebraic set over C, and (c) the number of connected components of a semi-algebraic set. We also present some results which significantly lower the complexity of deciding the emptiness of hypersurface intersections over C and Q, given the truth of the Generalized Riemann Hypothesis. Furthermore, we state some recent progress on the decidability of the prefixes and , quantified over the positive integers. As an application, we conclude with a result connecting Hilbert's Tenth Problem in three variables and height bounds for integral points on algebraic curves. This paper is based on three lectures presented at the conference corresponding to this proceedings volume. The titles of the lectures were "Some Speed-Ups in Computational Algebraic Geometry," "Diophantine Problems Nearly in the Polynomial Hierarchy," and "Curves, Surfaces, and the Frontier to Undecidability."
As for more general relations between @math and its analogue over @math , it is easy to see that the decidability of @math implies the decidability of its analogue over @math . Unfortunately, the converse is currently unknown. Via Lagrange's Theorem (that any positive integer can be written as a sum of four squares) one can easily show that the undecidability of @math implies the undecidability of the analogue of @math over @math . More recently, Zhi-Wei Sun has shown that the @math can be replaced by @math @cite_12 .
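The Lagrange argument can be made explicit with a single substitution; the rendering below is ours and only sketches the naive four-variables-per-unknown version, which the cited result of Sun improves.

```latex
% To decide whether  P(x_1,\dots,x_n) = 0  has a solution in positive
% integers, replace each unknown by a shifted sum of four squares:
\[
  x_i \;=\; 1 + a_i^2 + b_i^2 + c_i^2 + d_i^2 ,
  \qquad a_i, b_i, c_i, d_i \in \mathbb{Z}.
\]
% By Lagrange's theorem this substitution ranges exactly over the positive
% integers, so the original equation is solvable over the positive integers
% iff the new equation (in 4n integer unknowns) is solvable over Z.  Hence a
% decision procedure for the corresponding prefix class over Z would yield
% one over the positive integers; contrapositively, undecidability over the
% positive integers implies undecidability over Z.  Sun's result (cited
% above) replaces the naive 4n unknowns with 2n+2.
```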
{ "cite_N": [ "@cite_12" ], "mid": [ "2159229515" ], "abstract": [ "Let ∃n denote the set of all formulas ∃x1…∃xn[P(x1, …,xn) = 0], where P is a polynomial with integer coefficients. We prove a new relation-combining theorem from which it follows that if ∃n is undecidable over N, then ∃2n+2 is undecidable over Z." ] }
cs0005026
1644526253
A one-time pad (OTP) based cipher to ensure both data protection and integrity when mobile code arrives at a remote host is presented. Data protection is required because a mobile agent may retrieve confidential information that must be encrypted on untrusted nodes of the network; in this case, information management cannot rely on carrying an encryption key. Data integrity is a prerequisite because mobile code must be protected against malicious hosts that, by counterfeiting or removing collected data, could conceal information from the server that sent the agent. The algorithm described in this article seems simple enough to be easily implemented. This scheme is based on a non-interactive protocol and allows a remote host to change its own data on-the-fly while, at the same time, protecting the information against handling by other hosts.
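The abstract relies on two standard primitives, one-time-pad encryption and a keyed integrity check. The sketch below illustrates only these building blocks in Python; it is not the paper's protocol, how the pad is made available without the agent carrying a key is the subject of the paper itself, and all names and parameters here are ours.

```python
# Illustrative sketch only: XOR one-time pad for confidentiality plus a
# keyed hash for integrity.  Not the paper's scheme.
import os
import hashlib

def otp_encrypt(data: bytes, pad: bytes) -> bytes:
    """XOR the data with a pad that is at least as long and never reused."""
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return otp_encrypt(ciphertext, pad)   # XOR is its own inverse

def integrity_tag(ciphertext: bytes, secret: bytes) -> bytes:
    """A keyed hash over the ciphertext lets the sender detect tampering."""
    return hashlib.sha256(secret + ciphertext).digest()

pad = os.urandom(64)        # one-time pad, held by the agent's home server
secret = os.urandom(32)     # integrity key, also held by the home server

collected = b"confidential data gathered at a remote host"
ciphertext = otp_encrypt(collected, pad)
tag = integrity_tag(ciphertext, secret)

# Back at the home server: recover the data and verify it was not altered.
assert otp_decrypt(ciphertext, pad) == collected
assert integrity_tag(ciphertext, secret) == tag
```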
A strong foundation is a requirement for future work on mobile agents @cite_12 . Designing semantics and type-safe languages for agents in untrusted networks @cite_14 , as well as languages supporting permissions for specifying distributed processes in dynamically evolving networks, such as the languages derived from the @math -calculus @cite_18 , is important to protect hosts against malicious code. spoonhower:telephony have shown that agents could be used for collaborative applications, reducing network bandwidth requirements. sander:hosts have proposed a way to obtain code privacy using non-interactive evaluation of encrypted functions (EEF). hohl:mess has proposed the possibility of using algorithms to "mess up" code.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_12" ], "mid": [ "2006520458", "2019545801", "2061692729" ], "abstract": [ "We describe a foundational language for specifying dynamically evolving networks of distributed processes, D�. The language is a distributed extension of the �-calculus which incorporates the notions of remote execution, migration, and site failure. Novel features of D� include 1. Communication channels are explicitly located: the use of a channel requires knowledge of both the channel and its location. 2. Names are endowed with permissions: the holder of a name may only use that name in the manner allowed by these permissions.A type system is proposed in which-the types control the allocation of permissions; in well-typed processes all names are used in accordance with the permissions allowed by the types. We prove Subject Reduction and Type Safety Theorems for the type system. In the final section we define a semantic theory based on barbed bisimulations and discuss its characterization in terms of a bisimulation relation over a relativized labelled transition system.", "We present a partially-typed semantics for Dπ, a distributed π-calculus. The semantics is designed for mobile agents in open distributed systems in which some sites may harbor malicious intentions. Nonetheless, the semantics guarantees traditional type-safety properties at \"good\" locations by using a mixture of static and dynamic type-checking. We show how the semantics can be extended to allow trust between sites, improving performance and expressiveness without compromising type-safety.", "Almost all agent development to date has been “homegrown” [4] and done from scratch, independently, byeach development team. This has led to the followingproblems:• Lack of an agreed definition: Agents built bydifferent teams have different capabilities.• Duplication of effort: There has been little reuse ofagent architectures, designs, or components.• Inability to satisfy industrial strengthrequirements: Agents must integrate with existingsoftware and computer infrastructure. They must alsoaddress security and scaling concerns.Agents are complex and ambitious software systems thatwill be entrusted with critical applications. As such,agent based systems must be engineered with validsoftware engineering principles and not constructed in anad hoc fashion.Agent systems must have a strong foundation based onmasterful software patterns. Software patterns arose outof Alexander’s [2] work in architecture and urbanplanning. Many urban plans and architectures aregrandiose and ill-fated. Overly ambitious agent basedsystems built in an ad hoc fashion risk the same fate.They may never be built, or, due to their fragile nature,they may be built and either never used or used once andthen abandoned. A software pattern is a recurringproblem and solution; it may address conceptual,architectural or design problems.A pattern is described in a set format to ease itsdissemination. The format states the problem addressedby the pattern and the forces acting on it. There is also acontext that must be present for the pattern to be valid, astatement of the solution, and any known uses. Thefollowing sections summarize some key patterns of agentbased systems; for brevity, many of the patterns arepresented in an abbreviated “patlet” form. When kn ownuses are not listed for an individual pattern, it means thatthe pattern has arisen from the JAFIMA activity. 
Thepatterns presented in this paper represent progress towarda pattern language or living methodology for intelligentand mobile agents." ] }
cs0006023
2949089885
We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
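The HMM view described above can be made concrete with a short Viterbi decoder: a dialogue-act bigram supplies the transition probabilities and the per-act likelihood of the observed words and prosody supplies the emission scores. The sketch below is ours, with invented probabilities and a reduced tag set; it is not the paper's model or tag inventory.

```python
# Toy sketch of the HMM view: dialogue acts are hidden states, a DA bigram
# supplies transitions, per-DA evidence likelihoods act as emissions.
# All probabilities below are invented for illustration.
import math

ACTS = ["Statement", "Question", "Backchannel"]

# P(act_t | act_{t-1}) -- the dialogue act bigram ("discourse grammar").
TRANS = {
    "Statement":   {"Statement": 0.6, "Question": 0.2, "Backchannel": 0.2},
    "Question":    {"Statement": 0.7, "Question": 0.1, "Backchannel": 0.2},
    "Backchannel": {"Statement": 0.5, "Question": 0.3, "Backchannel": 0.2},
}
START = {"Statement": 0.5, "Question": 0.3, "Backchannel": 0.2}

def viterbi(likelihoods):
    """likelihoods[t][act] = P(evidence_t | act); returns the best DA sequence."""
    best = [{a: math.log(START[a]) + math.log(likelihoods[0][a]) for a in ACTS}]
    back = []
    for t in range(1, len(likelihoods)):
        scores, pointers = {}, {}
        for a in ACTS:
            prev, s = max(
                ((p, best[t - 1][p] + math.log(TRANS[p][a])) for p in ACTS),
                key=lambda x: x[1],
            )
            scores[a] = s + math.log(likelihoods[t][a])
            pointers[a] = prev
        best.append(scores)
        back.append(pointers)
    # Trace back the highest-scoring path.
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))

# Evidence likelihoods for a 3-utterance exchange (again, invented numbers).
obs = [
    {"Statement": 0.2, "Question": 0.7, "Backchannel": 0.1},
    {"Statement": 0.6, "Question": 0.2, "Backchannel": 0.2},
    {"Statement": 0.1, "Question": 0.1, "Backchannel": 0.8},
]
print(viterbi(obs))  # ['Question', 'Statement', 'Backchannel']
```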
Previous research on DA modeling has generally focused on task-oriented dialogue, with three tasks in particular garnering much of the research effort. The Map Task corpus @cite_49 @cite_12 consists of conversations between two speakers with slightly different maps of an imaginary territory. Their task is to help one speaker reproduce a route drawn only on the other speaker's map, all without being able to see each other's maps. Of the DA modeling algorithms described below, TaylorEtAl:LS98 and Wright:98 were based on Map Task. The VERBMOBIL corpus consists of two-party scheduling dialogues. A number of the DA modeling algorithms described below were developed for VERBMOBIL, including those of MastEtAl:96 , WarnkeEtAl:97 , Reithinger:96 , Reithinger:97 , and Samuel:98 . The ATR Conference corpus is a subset of a larger ATR Dialogue database consisting of simulated dialogues between a secretary and a questioner at international conferences. Researchers using this corpus include Nagata:92 , [1994] NagataMorimoto:93 , NagataMorimoto:94 and KitaEtAl:96 . Table shows the most commonly used versions of the tag sets from those three tasks.
{ "cite_N": [ "@cite_12", "@cite_49" ], "mid": [ "1784630400", "2118142207" ], "abstract": [ "The paper describes a resource for the study of spontaneous speech under stress, a corpus of 216 unscripted task-oriented dialogues conducted by normal Canadian adults in the course of a sleep deprivation experiment under 3 drug conditions. Speakers carried out the route-communication task in alternation with a battery of other tasks over a 6-day study which included a 60-hour sleepless period. Each speaker participated in 12 dialogues. The design permits comparisons within speakers for sleep deprivation (baseline, deprived, post-recovery), and between speakers for drug condition (placebo, d-amphetamine, Modafinil) and number of conversational partners encountered. Preliminary examination of dialogue length, task performance, and aspects of dialogue strategy indicate effects of all these variables. Effects of sleep-deprivation and drug condition are less severe than those found in simpler tasks.", "This paper describes a corpus of unscripted, task-oriented dialogues which has been designed, digitally recorded, and transcribed to support the study of spontaneous speech on many levels. The corpus uses the Map Task (Brown, Anderson, Yule, and Shillcock, 1983) in which speakers must collaborate verbally to reproduce on one participant's map a route printed on the other's. In all, the corpus includes four conversations from each of 64 young adults and manipulates the following variables: familiarity of speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. The motivations for the design are set out and basic corpus statistics are presented." ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
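The timer suppression mechanism evaluated in the paper can be illustrated with a small Monte Carlo sketch: each receiver draws a random response timer and suppresses its response if another response arrives first. The simulation below is ours, with invented delays; it stands in for neither the paper's FSM model nor its virtual-LAN abstraction, and only shows the overhead versus response-time trade-off that the two performance criteria capture.

```python
# Toy simulation of timer suppression: each receiver schedules a response
# after a random delay and suppresses it if it hears another response first.
# Parameters and delays are invented for illustration.
import random

def simulate(n_receivers, timer_max, one_way_delay, trials=10_000, seed=0):
    rng = random.Random(seed)
    total_msgs, total_resp_time = 0, 0.0
    for _ in range(trials):
        timers = sorted(rng.uniform(0, timer_max) for _ in range(n_receivers))
        first = timers[0]                      # the first response is always sent
        # A receiver responds unless another response reaches it beforehand;
        # with a uniform one-way delay, that means its own timer fires within
        # `one_way_delay` of the first response.
        msgs = sum(1 for t in timers if t < first + one_way_delay)
        total_msgs += msgs
        total_resp_time += first + one_way_delay   # when the requester hears it
    return total_msgs / trials, total_resp_time / trials

for timer_max in (0.5, 2.0, 8.0):
    overhead, resp_time = simulate(n_receivers=20, timer_max=timer_max,
                                   one_way_delay=0.1)
    print(f"timer_max={timer_max:>4}: avg responses={overhead:5.2f}, "
          f"avg response time={resp_time:5.2f}")
```

As expected, a larger timer interval lowers the duplicate-response overhead but lengthens the response time, which is exactly the tension the worst-case scenarios target.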
There is a large body of literature dealing with verification of protocols. Verification systems typically address well-defined properties --such as safety , liveness , and responsiveness @cite_37 -- and aim to detect violations of these properties. In general, the two main approaches for protocol verification are theorem proving and reachability analysis @cite_2 . Theorem proving systems define a set of axioms and relations to prove properties, and include model-based and logic-based formalisms @cite_35 @cite_17 . These systems are useful in many applications. However, these systems tend to abstract out some network dynamics that we will study (e.g., selective packet loss). Moreover, they do not synthesize network topologies and do not address performance issues per se.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_17", "@cite_2" ], "mid": [ "2099078002", "2005307131", "", "2034717157" ], "abstract": [ "Contains a precise and complete description of the computational logic develo by the authors; will serve also as a reference guide to the associated mechanical theorem proving system. Annotation copyright Book News, Inc. Portland, Or.", "This paper addresses the problem of designing stabilizing computer communication protocols modelled by communicating finite state machines. A communication protocol is said to be stabilizing if, starting from or reaching any illegal state, the protocol will eventually reach a legal (or consistent) state, and resume its normal execution. To achieve stabilization, the protocol must be able to detect the error as soon as it occurs, and then it must recover from that error and revert back to a legal protocol state. The later issue related to recovery is tackled here, and an efficient procedure for the recovery in communications protocols is described. The recovery procedure does not require periodic checkpointing and, therefore, is less intrusive. It requires less time for rollback and fewer recovery control messages than other procedures. Only a minimal number of processes will roll back, and a minimal number of protocol messages will be retransmitted during recovery. Moreover, our procedure requires minimal stable storage to be used to record contextual information exchanged during the progress of the protocol. Finally, our procedure is compared with an existing recovery procedure, and an illustrative example is provided.", "", "Hardware and software systems will inevitably grow in scale and functionality. Because of this increase in complexity, the likelihood of subtle errors is much greater. Moreover, some of these errors may cause catastrophic loss of money, time, or even human life. A major goal of software engineering is to enable developers to construct systems that operate reliably despite this complexity. One way of achieving this goal is by using formal methods, which are mathematically based languages, techniques, and tools for specifying and verifying such systems. Use of formal methods does not a priori guarantee correctness. However, they can greatly increase our understanding of a system by revealing inconsistencies, ambiguities, and incompleteness that might otherwise go undetected. The first part of this report assesses the state of the art in specification and verification. For verification, we highlight advances in model checking and theorem proving. In the three sections on specification, model checking, and theorem proving, we explain what we mean by the general technique and briefly describe some successful case studies and well-known tools. The second part of this report outlines future directions in fundamental concepts, new methods and tools, integration of methods, and education and technology transfer. We close with summary remarks and pointers to resources for more information." ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
There is a good number of publications dealing with conformance testing @cite_19 @cite_23 @cite_22 @cite_5 . However, conformance testing verifies that an implementation (as a black box) adheres to a given specification of the protocol by constructing input output sequences. Conformance testing is useful during the implementation testing phase --which we do not address in this paper-- but does not address performance issues nor topology synthesis for design testing. By contrast, our method synthesizes test scenarios for protocol design, according to evaluation criteria.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_22", "@cite_23" ], "mid": [ "2073074602", "1965028135", "2029755436", "2326128742" ], "abstract": [ "Abstract We present simple randomized algorithms for the fault detection problem : Given a specification in the form of a deterministic finite state machine A and an implementation machine B , determine whether B is equal to A . If A has n states and p inputs, then in randomized polynomial time we can construct with high probability a checking sequence of length O ( pn 4 log n ), i.e., a sequence that detects all faulty machines with at most n states. Better bounds can be obtained in certain cases. The techniques generalize to partially specified finite state machines.", "Abstract A procedure presented here generates test sequences for checking the conformity of an implementation to the control portion of a protocol specification, which is modeled as a deterministic finite-state machine (FSM). A test sequence generated by the procedure given here tours all state transitions and uses a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an input output behavior that is not exhibited by any other state. An algorithm is presented for generating a minimum-length UIO sequence, should it exist, for a given state. UIO sequences may not exist for some states.", "A novel procedure presented here generates test sequences for checking the conformity of protocol implementations to their specifications. The test sequences generated by this procedure only detect the presence of many faults, but they do not locate the faults. It can always detect the problem in an implementation with a single fault. A protocol entity is specified as a finite state machine (FSM). It typically has two interfaces: an interface with the user and with the lower-layer protocol. The inputs from both interfaces are merged into a single set I and the outputs from both interfaces are merged into a single set O. The implementation is assumed to be a black box. The key idea in this procedure is to tour all states and state transitions and to check a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an I O behavior that is not exhibited by any other state.", "" ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
Automatic test generation techniques have been used in several fields. VLSI chip testing @cite_12 uses test vector generation to detect target faults. Test vectors may be generated based on circuit and fault models using the fault-oriented technique, which utilizes implication techniques. These techniques were adopted in @cite_25 to develop fault-oriented test generation (FOTG) for multicast routing. In @cite_25 , FOTG was used to study the correctness of a multicast routing protocol on a LAN. We extend FOTG to study the performance of end-to-end multipoint mechanisms. We introduce the concept of a virtual LAN to represent the underlying network, integrate timing and delay semantics into our model, and use performance criteria to drive our synthesis algorithm.
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "1962021926", "1554885925" ], "abstract": [ "We present a new algorithm for automatic test generation for multicast routing. Our algorithm processes a finite state machine (FSM) model of the protocol and uses a mix of forward and backward search techniques to generate the tests. The output tests include a set of topologies, protocol events and network failures, that lead to violation of protocol correctness and behavioral requirements. We target protocol robustness in specific, and do not attempt to verify other properties in this paper. We apply our method to a multicast routing protocol; PIM-DM, and investigate its behavior in the presence of selective packet loss on LANs and router crashes. Our study unveils several robustness violations in PIM-DM, for which we suggest fixes with the aid of the presented algorithm.", "For many years, Breuer-Friedman's Diagnosis and Reliable Design ofDigital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greativ expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-ofthe-art treatment of both testing and testable design." ] }
cs0007002
2949562458
Many problems in robust control and motion planning can be reduced either to finding a sound approximation of the solution space determined by a set of nonlinear inequalities, or to the "guaranteed tuning problem" as defined by Jaulin and Walter, which amounts to finding a value for some tuning parameter such that a set of inequalities is verified for all possible values of some perturbation vector. A classical approach to solving these problems, which satisfies the strong soundness requirement, involves a quantifier elimination procedure such as Collins' Cylindrical Algebraic Decomposition symbolic method. Sound numerical methods using interval arithmetic and local consistency enforcement to prune the search space are presented in this paper as much faster alternatives both for soundly solving systems of nonlinear inequalities and for addressing the guaranteed tuning problem whenever the perturbation vector has dimension one. The use of these methods in camera control is investigated, and experiments with the prototype of a declarative modeller that expresses camera motion using a cinematic language are reported and commented on.
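The interval-based approach can be sketched as a branch-and-prune loop: evaluate each inequality over a box with interval arithmetic, keep boxes that are provably inside the solution set, discard boxes that are provably outside, and bisect the rest. The code below is our toy rendering with a single disc constraint; it does not include the local consistency enforcement or the guaranteed tuning extension of the paper.

```python
# Hedged sketch of sound approximation of {(x, y) : g(x, y) <= 0} by interval
# branch-and-prune.  The constraint g (a disc) is a toy stand-in, not the
# camera-control constraints of the paper.

def square(lo, hi):
    """Interval extension of x**2."""
    cands = (lo * lo, hi * hi)
    return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))

def g_interval(box):
    """Interval extension of g(x, y) = x**2 + y**2 - 1."""
    (xlo, xhi), (ylo, yhi) = box
    sxlo, sxhi = square(xlo, xhi)
    sylo, syhi = square(ylo, yhi)
    return sxlo + sylo - 1.0, sxhi + syhi - 1.0

def solve(box, eps=0.125):
    inner, boundary = [], []
    stack = [box]
    while stack:
        b = stack.pop()
        (xlo, xhi), (ylo, yhi) = b
        glo, ghi = g_interval(b)
        if ghi <= 0.0:
            inner.append(b)                 # sound: every point satisfies g <= 0
        elif glo > 0.0:
            continue                        # sound: no point satisfies g <= 0
        elif max(xhi - xlo, yhi - ylo) <= eps:
            boundary.append(b)              # undecided, too small to split further
        else:
            xm, ym = (xlo + xhi) / 2, (ylo + yhi) / 2
            stack += [((xlo, xm), (ylo, ym)), ((xm, xhi), (ylo, ym)),
                      ((xlo, xm), (ym, yhi)), ((xm, xhi), (ym, yhi))]
    return inner, boundary

inner, boundary = solve(((-2.0, 2.0), (-2.0, 2.0)))
print(len(inner), "inner boxes,", len(boundary), "undecided boxes")
```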
The method presented by @cite_40 is strongly related to the one we present in the following, since they rely on usual interval constraint solving techniques to compute sound boxes for some constraint system. Starting from a seed that is known to belong to the solution space, they enlarge the domain of the variables around it in such a way that the new box computed is still included in the solution space. They do so by using local consistency techniques to find the points at which the truth value of the constraints change. Their algorithm is particularly well suited for the applications they target, the enlargement of tolerances. It is however not designed to solve the guaranteed tuning problem. In addition, it is necessary to obtain a seed for each connected subset of the solution space, and to apply the algorithm on each seed if one is interested in computing several solutions (e.g. to ensure representativeness of the samples).
{ "cite_N": [ "@cite_40" ], "mid": [ "1557280638" ], "abstract": [ "This paper introduces a new framework for extending consistent domains of numeric CSP. The aim is to offer the greatest possible freedom of choice for one variable to the designer of a CAD application. Thus, we provide here an efficient and incremental algorithm which computes the maximal extension of the domain of one variable. The key point of this framework is the definition, for each inequality, of an univariate extrema function which computes the left most and right most solutions of a selected variable (in a space delimited by the domains of the other variables). We show how these univariate extrema functions can be implemented efficiently. The capabilities of this approach are illustrated on a ballistic example." ] }
cs0007004
1931024191
Despite the efforts of many researchers in the area of multi-agent systems (MAS) on designing and programming agents, a few years ago the research community began to recognize that common features exist among different MAS. Based on these common features, several tools have tackled the problem of agent development for specific application domains or specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents, named Brainstorm J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, while being flexible and reusable.
JAFIMA (Java Framework for Intelligent and Mobile Agents) @cite_12 takes a different approach from the other tools: it is primarily targeted at expert developers who want to develop agents from scratch based on the abstract classes provided, so the programming effort is greater than in the other tools. The weakest point of JAFIMA is its rule-based mechanism for defining agents' behavior. This mechanism does not support complex behaviors such as on-line planning or learning. Moreover, the abstractions for representing mental states lack flexibility and services for manipulating symbolic data.
{ "cite_N": [ "@cite_12" ], "mid": [ "1514474014" ], "abstract": [ "BUSINESS FRAMEWORKS (M. Fayad). Domain Framework for Sales Promotions (A. Dalebout, et al). A Reflective and Repository-Based Framework (M. Devos & M. Tilman). ARTIFICIAL INTELLIGENCE AND AGENT APPLICATION FRAMEWORKS (M. Fayad). Speech Recognition Framework (S. Srinivasan & J. Vergo). Neural Network Components (F. Beckenkamp & W. Pree). A Framework for Agent Systems (E. Kendall, et al). A Model for Reusable Agent Systems (D. Brugali & K. Sycara). Experimentation with an Agent-Oriented Platform in JAVA (P. Marcenac & R. Courdier). SPECIALIZED TOOL FRAMEWORKS (M. Fayad). CSP++: A Framework for Executable Specifications (W. Gardner & M. Serra). Applying Inheritance beyond Class-Based Languages (G. Banavar & G. Lindstrom). Luthier: Building Framework-Visualization Tools (M. Campo & R. Price). Scalable Architecture for Reliable, High-Volume Datafeed Handlers (R. Kannan). LANGUAGE-SPECIFIC FRAMEWORKS (M. Fayad). Hierarchical and Distributed Constraint Satisfaction Systems (D. Brugali). Modeling Collections of Changing Interdependent Objects (A. Ahmed, et al). Oberon with Gadgets: A Simple Component Framework (J. Gutknecht & M. Franz). Inheritance Management and Method Dispatch Framework (W. Holst & D. Szafron). Constraint Satisfaction Problems Framework (P. Roy, et al). Developing Frameworks to Support Design Reuse (H. Erdogmus & O. Tanir). Language Support for Application Framework Design (G. Hedin & J. Knudsen). SYSTEM APPLICATION FRAMEWORKS (M. Fayad). Tigger: A Framework Supporting Distributed and Persistent Objects (V. Cahill). The Deja Vu Scheduling Class Library (J. Dorn). A Framework for Graphics Recognition (L. Wenyin & D. Dori). A JavaBeans Framework for Cryptographic Protocols (P. Nikander & J. Parssinen). Dynamic Database Instance Framework (D. Janello, et al). Compound User Interfaces Framework (C. Szyperski & C. Pfister). EXPERIENCES IN APPLICATION FRAMEWORKS (M. Fayad). Framework Developing Using Patterns (B. Woolf). Experiences with the Semantic Graphics Framework (A. Rosel & K. Erni). Enterprise Model-Based Framework (J. Greenfield & A. Chatterjee). Appendices. Index." ] }
cs0007004
1931024191
Despite the efforts of many researchers in the area of multi-agent systems (MAS) on designing and programming agents, a few years ago the research community began to recognize that common features exist among different MAS. Based on these common features, several tools have tackled the problem of agent development for specific application domains or specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents, named Brainstorm J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, while being flexible and reusable.
A framework such as Brainstorm J is not just a collection of components; it also defines a generic design. When programmers use a framework, they reuse that design and save time and effort. In addition, because of the bidirectional flow of control, frameworks can contain much more functionality than a traditional library, regardless of whether it is a procedural or class library @cite_9 .
{ "cite_N": [ "@cite_9" ], "mid": [ "38113107" ], "abstract": [ "A fuel unit for a hybrid hot-gas generator comprises two consumable constituent parts of different compositions which are separated by a partition which in storage conditions is stable and chemically inert with respect to said two parts and which is consumable during operation of the generator. Reactions between said constituent parts during storage, which might impair the combustion of the fuel unit, are thereby prevented." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Our definition of correlation-intractability is related to a definition by Okamoto @cite_21 . Using our terminology, Okamoto considers function ensembles for which it is infeasible to form input-output relations with respect to a specific evasive relation [Def. 19] Ok92 (rather than all such relations). He uses the assumption that such function ensembles exist, for a specific evasive relation, in [Thm. 20] Ok92 .
{ "cite_N": [ "@cite_21" ], "mid": [ "1554259298" ], "abstract": [ "This paper presents a three-move interactive identification scheme and proves it to be as secure as the discrete logarithm problem. This provably secure scheme is almost as efficient as the Schnorr identification scheme, while the Schnorr scheme is not provably secure. This paper also presents another practical identification scheme which is proven to be as secure as the factoring problem and is almost as efficient as the Guillou-Quisquater identification scheme: the Guillou-Quisquater scheme is not provably secure. We also propose practical digital signature schemes based on these identification schemes. The signature schemes are almost as efficient as the Schnorr and Guillou-Quisquater signature schemes, while the security assumptions of our signature schemes are weaker than those of the Schnorr and Guillou-Quisquater. signature schemes. This paper also gives a theoretically generalized result: a three-move identification scheme can be constructed which is as secure as the random-self-reducible problem. Moreover, this paper proposes a variant which is proven to be as secuie as the difficulty of solving both the discrete logarithm problem and the specific factoring problem simultaneously. Some other variants such as an identity-based variant and an elliptic curve variant are also proposed." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
First steps in the direction of identifying and studying useful special-purpose properties of the random oracle have been taken by Canetti @cite_10 . Specifically, Canetti considered a property called "perfect one-wayness", provided a definition of this property, constructions which possess this property (under some reasonable assumptions), and applications for which such functions suffice. Additional constructions have been suggested by Canetti, Micciancio and Reingold @cite_4 . Another context where specific properties of the random oracle were captured and realized is the signature scheme of Gennaro, Halevi and Rabin @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_4" ], "mid": [ "1591954407", "2139033758", "122485702" ], "abstract": [ "We present a new signature scheme which is existentially unforgeable under chosen message attacks, assuming some variant of the RSA conjecture. This scheme is not based on \"signature trees\", and instead it uses the so called \"hash-and-sign\" paradigm. It is unique in that the assumptions made on the cryptographic hash function in use are well defined and reasonable (although non-standard). In particular, we do not model this function as a random oracle. We construct our proof of security in steps. First we describe and prove a construction which operates in the random oracle model. Then we show that the random oracle in this construction can be replaced by a hash function which satisfies some strong (but well defined!) computational assumptions. Finally, we demonstrate that these assumptions are reasonable, by proving that a function satisfying them exists under standard intractability assumptions.", "The random oracle model is a very convenient setting for designing cryptographic protocols. In this idealized model all parties have access to a common, public random function, called a random oracle. Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not have a general mechanism for transforming protocols that are secure in the random oracle model into protocols that are secure in real life. In fact, we do not even know how to meaningfully specify the properties required from such a mechanism. Instead, it is a common practice to simply replace — often without mathematical justification — the random oracle with a ‘cryptographic hash function’ (e.g., MD5 or SHA). Consequently, the resulting protocols have no meaningful proofs of security.", "" ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Following the preliminary version of the current work @cite_25 , Hada and Tanaka observed that the existence of even restricted correlation-intractable functions (in the non-uniform model) would be enough to prove that 3-round auxiliary-input zero-knowledge AM proof systems only exist for languages in BPP @cite_0 . (Recall that auxiliary-input zero-knowledge is seemingly weaker than black-box zero-knowledge, and so the result of @cite_0 is incomparable to the prior work of Goldreich and Krawczyk @cite_3 , which showed that constant-round black-box zero-knowledge AM proof systems only exist for languages in BPP.)
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_3" ], "mid": [ "1590334370", "", "1987890787" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives.", "", "The wide applicability of zero-knowledge interactive proofs comes from the possibility of using these proofs as subroutines in cryptographic protocols. A basic question concerning this use is whether the (sequential and or parallel) composition of zero-knowledge protocols is zero-knowledge too. We demonstrate the limitations of the composition of zero-knowledge protocols by proving that the original definition of zero-knowledge is not closed under sequential composition; and that even the strong formulations of zero-knowledge (e.g., black-box simulation) are not closed under parallel execution. We present lower bounds on the round complexity of zero-knowledge proofs, with significant implications for the parallelization of zero-knowledge protocols. We prove that three-round interactive proofs and constant-round Arthur--Merlin proofs that are black-box simulation zero-knowledge exist only for languages in BPP. In particular, it follows that the \"parallel versions\" of the first interactive proofs systems presented for quadratic residuosity, graph isomorphism, and any language in NP, are not black-box simulation zero-knowledge, unless the corresponding languages are in BPP. Whether these parallel versions constitute zero-knowledge proofs was an intriguing open questions arising from the early works on zero-knowledge. Other consequences are a proof of optimality for the round complexity of various known zero-knowledge protocols and the necessity of using secret coins in the design of \"parallelizable\" constant-round zero-knowledge proofs." ] }
cs0011005
2952710481
This paper presents a practical solution for detecting data races in parallel programs. The solution consists of a combination of execution replay (RecPlay) with automatic on-the-fly data race detection. This combination enables us to perform data race detection on an unaltered execution (almost no probe effect). Furthermore, the use of multilevel bitmaps and snooped matrix clocks limits the amount of memory used. As the record phase of RecPlay is highly efficient, there is no need to switch it off, thereby eliminating the possibility of Heisenbugs, because tracing can be left on all the time.
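The core check behind on-the-fly race detection can be sketched with plain vector clocks: two accesses to the same variable race if at least one is a write and neither happens before the other. The sketch below is ours; RecPlay's multilevel bitmaps and snooped matrix clocks are space-saving refinements of this idea that are not reproduced here, and the trace format is invented.

```python
# Minimal happens-before race check over a recorded trace, using plain
# vector clocks.  The trace format below is invented for illustration.
from collections import defaultdict

NTHREADS = 2
clock = [[0] * NTHREADS for _ in range(NTHREADS)]   # one vector clock per thread
last_access = defaultdict(list)   # var -> [(thread, is_write, clock snapshot)]

def happens_before(c1, c2):
    return all(a <= b for a, b in zip(c1, c2)) and c1 != c2

def access(tid, var, is_write):
    clock[tid][tid] += 1
    snap = list(clock[tid])
    for other_tid, other_write, other_snap in last_access[var]:
        conflicting = is_write or other_write
        ordered = happens_before(other_snap, snap) or happens_before(snap, other_snap)
        if other_tid != tid and conflicting and not ordered:
            print(f"data race on {var!r}: thread {other_tid} vs thread {tid}")
    last_access[var].append((tid, is_write, snap))

def sync(sender, receiver):
    """Model a synchronization edge (e.g. unlock/lock) from sender to receiver."""
    clock[sender][sender] += 1
    clock[receiver] = [max(a, b) for a, b in zip(clock[receiver], clock[sender])]
    clock[receiver][receiver] += 1

# A tiny replayed trace: thread 0 and thread 1 both write x with no
# intervening synchronization (reported as a race); y is properly ordered.
access(0, "x", is_write=True)
access(1, "x", is_write=True)      # race reported here
access(0, "y", is_write=True)
sync(0, 1)
access(1, "y", is_write=True)      # no race: ordered by the sync edge
```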
Although much theoretical work has been done in the field of data race detection @cite_19 @cite_25 @cite_10 @cite_17 , few implementations for general systems have been proposed. Tools proposed in the past had limited capabilities: they were targeted at programs using one semaphore @cite_11 , programs using only post/wait synchronisation @cite_22 , or programs with nested fork-join parallelism @cite_10 @cite_21 . The tool that comes closest to our data race detection mechanism, apart from @cite_26 for a proprietary system, is an on-the-fly data race detection mechanism for the CVM (Concurrent Virtual Machine) system @cite_24 . That tool only instruments the memory references to distributed shared data (about 1% of the references) and is unable to perform reference identification: it will return the variable that was involved in a data race, but not the instructions that are responsible for the reference.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_21", "@cite_17", "@cite_24", "@cite_19", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2168171401", "32942804", "", "", "2162446957", "2147506153", "2170200862", "2088270410", "" ], "abstract": [ "We describe an integrated approach to support debugging of nondeterministic concurrent programs. Our tool provides reproducible program behavior and incorporates mechanisms to identify synchronization bugs commonly termed data races or access anomalies. Both features are based on partially ordered event logs captured at run time. Our mechanism identifies a race condition that is guaranteed to be unaffected by other races in the considered execution. Data collection and analysis for race detection has no impact on the original computation since it is done in replay mode. The race detection and execution replay mechanisms are integrated in the MOSKITO operating system.", "This paper presents results on the complexity of computing event orderings for sharedmemory parallel program executions. Given a program execution, we formally define the problem of computing orderings that the execution must have exhibited or could have exhibited, and prove that computing such orderings is an intractable problem. We present a formal model of a shared-memory parallel program execution on a sequentially consistent processor, and discuss event orderings in terms of this model. Programs are considered that use fork join and either counting semaphores or event style synchronization. We define a feasible program execution to be an execution of the program that performs the same events as an observed execution, but which may exhibit different orderings among those events. Any program execution exhibiting the same data dependences among the shared data as the observed execution is feasible. We define several relations that capture the orderings present in all (or some) of these feasible program executions. The happened-before, concurrent-with, and ordered-with relations are defined to show events that execute in a certain order, that execute concurrently, or that execute in either order but not concurrently. Each of these ordering relations is defined in two ways. In the must-have sense they show the orderings that are guaranteed to be present in all feasible program executions, and in the could-have sense they show the orderings that could potentially occur in at least one feasible program execution due to timing variations. We prove that computing any of the must-have ordering relations is a co-NP-hard problem and that computing any of the could-have ordering relations is an NP-hard problem.", "", "", "", "For shared-memory systems, the most commonly assumed programmer’s model of memory is sequential consistency. The weaker models of weak ordering, release consistency with sequentially consistent synchronization operations, data-race-free-O, and data-race-free-1 provide higher performance by guaranteeing sequential consistency to only a restricted class of programs - mainly programs that do not exhibit data races. To allow programmers to use the intuition and algorithms already developed for sequentially consistent systems, it is impontant to determine when a program written for a weak system exhibits no data races. In this paper, we investigate the extension of dynamic data race detection techniques developed for sequentially consistent systems to weak systems. 
A potential problem is that in the presence of a data race, weak systems fail to guarantee sequential consistency and therefore dynamic techniques may not give meaningful results. However, we reason that in practice a weak system will preserve sequential consistency at least until the "first" data races since it cannot predict if a data race will occur. We formalize this condition and show that it allows data races to be dynamically detected. Further, since this condition is already obeyed by all proposed implementations of weak systems, the full performance of weak systems can be exploited.", "Detecting data races in shared-memory parallel programs is an important debugging problem. This paper presents a new protocol for run-time detection of data races in executions of shared-memory programs with nested fork-join parallelism and no other inter-thread synchronization. This protocol has significantly smaller worst-case run-time overhead than previous techniques. The worst-case space required by our protocol when monitoring an execution of a program P is O(V N), where V is the number of shared variables in P, and N is the maximum dynamic nesting of parallel constructs in P's execution. The worst-case time required to perform any monitoring operation is O(N). We formally prove that our new protocol always reports a nonempty subset of the data races in a monitored program execution and describe how this property leads to an effective debugging strategy.", "For shared-memory parallel programs that use explicit synchronization, data race detection is an important part of debugging. A data race exists when concurrently executing sections of code access common shared variables. In programs intended to be data race free, they are sources of nondeterminism usually considered bugs. Previous methods for detecting data races in executions of parallel programs can determine when races occurred, but can report many data races that are artifacts of others and not direct manifestations of program bugs. Artifacts exist because some races can cause others and can also make false races appear real. Such artifacts can overwhelm the programmer with information irrelevant for debugging. This paper presents results showing how to identify nonartifact data races by validation and ordering. Data race validation attempts to determine which races involve events that either did execute concurrently or could have (called feasible data races). We show how each detected race can either be guaranteed feasible, or when insufficient information is available, sets of races can be identified within which at least one is guaranteed feasible. Data race ordering attempts to identify races that did not occur only as a result of others. Data races can be partitioned so that it is known whether a race in one partition may have affected a race in another. The first partitions are guaranteed to contain at least one feasible data race that is not an artifact of any kind. By combining validation and ordering, the programmer can be directed to those data races that should be investigated first for debugging.", "" ] }
cs0012007
2950755945
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the ``reasons'' of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating the modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
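To make the idea of locating errors via minimal inconsistent constraint subsets concrete, here is a minimal sketch (an illustration under our own assumptions, written in Python rather than Kima's own implementation language): a deletion-based search that, given an inconsistent constraint set and a consistency oracle, drops constraints one at a time and keeps only those whose removal restores consistency. The surviving constraints form a deletion-minimal inconsistent core, and the source locations attached to them are the candidate error sites. The constraint encoding and the is_consistent oracle below are invented for illustration.

# Illustrative sketch only (not Kima): a deletion-based search for one
# minimal inconsistent subset of constraints, each tagged with the
# source location that generated it.  `is_consistent` stands in for a
# real mode/type constraint solver.

def minimal_inconsistent_subset(constraints, is_consistent):
    """Return an inconsistent subset that becomes consistent as soon as
    any single constraint is removed (a deletion-minimal core)."""
    assert not is_consistent(constraints), "input must be inconsistent"
    core = list(constraints)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]
        if not is_consistent(candidate):
            core = candidate      # constraint i was not needed
        else:
            i += 1                # constraint i is essential; keep it
    return core

# Toy constraints on a single mode variable m; each one remembers the
# (invented) symbol occurrence that imposed it.
constraints = [
    ("line 3, arg 1 of p/2", lambda env: env.setdefault("m", "in") == "in"),
    ("line 5, arg 1 of p/2", lambda env: env.setdefault("m", "out") == "out"),
    ("line 9, arg 2 of q/1", lambda env: True),
]

def is_consistent(cs):
    env = {}
    return all(check(env) for _, check in cs)

core = minimal_inconsistent_subset(constraints, is_consistent)
print([loc for loc, _ in core])   # candidate error sites: lines 3 and 5

For the toy input above, the core consists of the two constraints that disagree on the mode of the first argument of p/2, which is exactly the kind of hint an automated corrector can act on.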
Analysis of malfunctioning systems based on their intended logical specification has been studied in the field of artificial intelligence @cite_9 and is known as model-based diagnosis, which has some similarities with our work. However, the purpose of model-based diagnosis is to analyze the differences between the intended and observed behaviors, whereas our system does not require that the intended behavior of a program be given as declarations.
{ "cite_N": [ "@cite_9" ], "mid": [ "2108309071" ], "abstract": [ "Suppose one is given a description of a system, together with an observation of the system's behaviour which conflicts with the way the system is meant to behave. The diagnostic problem is to determine those components of the system which, when assumed to be functioning abnormally, will explain the discrepancy between the observed and correct system behaviour. We propose a general theory for this problem. The theory requires only that the system be described in a suitable logic. Moreover, there are many such suitable logics, e.g. first-order, temporal, dynamic, etc. As a result, the theory accommodates diagnostic reasoning in a wide variety of practical settings, including digital and analogue circuits, medicine, and database updates. The theory leads to an algorithm for computing all diagnoses, and to various results concerning principles of measurement for discriminating among competing diagnoses. Finally, the theory reveals close connections between diagnostic reasoning and nonmonotonic reasoning." ] }
cs0012007
2950755945
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the ``reasons'' of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating the modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
Wand proposed an algorithm for diagnosing non-well-typed functional programs @cite_5 . His approach was to extend the unification algorithm for type reconstruction to record which symbol occurrence imposed which constraint. In contrast, our framework is built outside any underlying framework of constraint solving. It neither incurs any overhead for well-moded/typed programs nor modifies the constraint-solving algorithm.
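As a rough illustration of the kind of bookkeeping Wand's approach requires (a sketch under our own assumptions, not his actual algorithm), the unifier below tags every equation it processes with the program location that imposed it, so that a failure can report the offending occurrences. The term representation and the source locations are invented.

# Sketch only (not Wand's actual algorithm): unification over simple
# type terms in which each equation carries the source location that
# imposed it, so a failure can report which occurrences conflict.

class Var:
    def __init__(self, name):
        self.name = name
        self.ref = None           # bound type, once known

def resolve(t):
    while isinstance(t, Var) and t.ref is not None:
        t = t.ref
    return t

def unify(a, b, reason, trail):
    a, b = resolve(a), resolve(b)
    trail.append(reason)                          # remember provenance
    if isinstance(a, Var):
        a.ref = b
    elif isinstance(b, Var):
        b.ref = a
    elif isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            unify(x, y, reason, trail)
    else:
        raise TypeError("cannot unify %r with %r; constraints from: %s"
                        % (a, b, "; ".join(dict.fromkeys(trail))))

# Toy usage: 'f 3' forces f : int -> a, while 'f True' forces f : bool -> b.
trail, f = [], Var("f")
unify(f, ("->", ("int",), Var("a")), "application 'f 3' at line 1", trail)
unify(f, ("->", ("bool",), Var("b")), "application 'f True' at line 2", trail)
# raises TypeError naming both applications as the conflicting occurrences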
{ "cite_N": [ "@cite_5" ], "mid": [ "2045313089" ], "abstract": [ "It is a truism that most bugs are detected only at a great distance from their source. Although polymorphic type-checking systems like those in ML help greatly by detecting potential run-time type errors at compile-time, such systems are still not very helpful for locating the source of a type error. Typically, an error is reported only when the type-checker can proceed no further, even though the programmer's actual error may have occurred much earlier in the text. We describe an algorithm which appears to be quite helpful in isolating and explaining the source of type errors. The algorithm works by keeping track of the reasons the checker makes deductions about the types of variables." ] }
cs0102023
2122926560
Abstract: This note addresses the input and output of intervals in the sense of interval arithmetic and interval constraints. The most obvious, and so far most widely used, notation for intervals has drawbacks that we remedy with a new notation that we propose to call factored notation. It is more compact and allows one to find a good trade-off between interval width and ease of reading. We describe how such a trade-off can be based on the information yield (in the sense of information theory) of the last decimal shown. 1 Introduction. Once upon a time, it was a matter of professional ethics among computers never to write a meaningless decimal. Since then computers have become machines and thereby lost any form of ethics, professional or otherwise. The human computers of yore were helped in their ethical behaviour by the fact that it took effort to write spurious decimals. Now the situation is reversed: the lazy way is to use the default precision of the I/O library function. As a result it is common to see fifteen decimals, all but three of which are meaningless. Of course interval arithmetic is not guilty of such negligence. After all, the very raison d'être of the subject is to be explicit about the precision of computed results. Yet, even interval arithmetic is plagued by phoney decimals, albeit in a more subtle way. Just as conventional computation often needs more care in the presentation of computational results, the most obvious interval notation with default precision needs improvement. As a bounded interval has two bounds, say, l and u, the most straightforward notation is something like [l,u]. Written like this, it may not be immediately obvious what is wrong with writing it that way. But when confronted with a real-life consequence
Hansen @cite_3 , @cite_5 , and Kearfott @cite_1 opt for the straightforward @math notation. Hansen mostly presents bounds with few digits, but for instance on page 178 we find @math demonstrating the problems addressed here.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_3" ], "mid": [ "186367297", "2129480103", "2067648572" ], "abstract": [ "From the Publisher: This book offers a general discussion on arithmetic and computational reliability, analytical mathematics and verification techniques, algorithms, and (most importantly) actual C++ implementations. In each chapter, examples, exercises, and numerical results demonstrate the application of the routines presented. The book introduces many computational verification techniques. It is not assumed that the reader has any prior formal knowledge of numerical verfication or any familiarity with interval analysis. The necessary concepts are introduced.", "List of Figures. List of Tables. Preface. 1. Preliminaries. 2. Software Environments. 3. On Preconditioning. 4. Verified Solution of Nonlinear Systems. 5. Optimization. 6. Non-Differentiable Problems. 7. Use of Intermediate Quantities in the Expression Values. References. Index.", "Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic." ] }
cs0102023
2122926560
Abstract: This note addresses the input and output of intervals in the sense of interval arithmetic and interval constraints. The most obvious, and so far most widely used, notation for intervals has drawbacks that we remedy with a new notation that we propose to call factored notation. It is more compact and allows one to find a good trade-off between interval width and ease of reading. We describe how such a trade-off can be based on the information yield (in the sense of information theory) of the last decimal shown. 1 Introduction. Once upon a time, it was a matter of professional ethics among computers never to write a meaningless decimal. Since then computers have become machines and thereby lost any form of ethics, professional or otherwise. The human computers of yore were helped in their ethical behaviour by the fact that it took effort to write spurious decimals. Now the situation is reversed: the lazy way is to use the default precision of the I/O library function. As a result it is common to see fifteen decimals, all but three of which are meaningless. Of course interval arithmetic is not guilty of such negligence. After all, the very raison d'être of the subject is to be explicit about the precision of computed results. Yet, even interval arithmetic is plagued by phoney decimals, albeit in a more subtle way. Just as conventional computation often needs more care in the presentation of computational results, the most obvious interval notation with default precision needs improvement. As a bounded interval has two bounds, say, l and u, the most straightforward notation is something like [l,u]. Written like this, it may not be immediately obvious what is wrong with writing it that way. But when confronted with a real-life consequence
The standard notation in the Numerica book @cite_2 solves the scanning problem in an interesting way. It uses the idea of the @math notation, but writes @math instead. This variation has the advantage of not introducing new notation. The reason why we still prefer factored notation is clear from the @math example ), which, if rewritten as @math , becomes @math . Although it is attractive not to introduce special-purpose notation, there is so much redundancy here that the factored alternative @math seems worth the new notation.
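Since the concrete notations above are elided (the @math placeholders), here is a purely hypothetical illustration of the underlying idea, with invented numbers: when the two bounds of a tight interval share a long prefix of digits, writing that prefix only once removes the redundancy that makes the plain bracket form hard to scan.

\[
[\,1.2345617,\ 1.2345693\,]
\qquad\text{versus a factored form such as}\qquad
1.23456[17,93].
\]

Both expressions denote the same interval; the second makes the shared digits and the uncertain trailing digits visually distinct. The exact factored notation proposed in the paper may differ; this is only an illustration of the principle.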
{ "cite_N": [ "@cite_2" ], "mid": [ "2167206762" ], "abstract": [ "Part 1 Introduction: nonlinear programming local methods global methods Numerica outline. Part 2 A tour of Numerica: getting started generic constraints constants ranges input parameters aggregation operators functions sets unconstrained optimization constrained optimization local constraint solving local unconstrained optimization soft constraints real constraints and uncertain data display accuracy. Part 3 The meaning of Numerica: interval analysis constraint solving unconstrained optimization interpretation of the results. Part 4 Modelling in Numerica: what can go wrong in Numerica improving Numerica statements. Part 5 The syntax of Numerica: overall structure expressions the constant section the input section the set section the variable section the function section the body section the display section the pragma section scoping rules. Part 6 The semantics of Numerica: interval arithmetic semantics of constraint solving semantics of unconstrained minimization semantics of constrained minimization non-canonical boxes. Part 7 An implementation of Numerica: overview of the algorithm domain-specific and monotonic interval extensions constraint solving unconstrained optimization constrained optimization advanced techniques an implementation of box consistency. Part 8 Experimental results: constraint solving unconstrained optimization constrained optimization appendices." ] }
cs0102017
2950044032
Parallel jobs are different from sequential jobs and require a different type of process management. We present here a process management system for parallel programs such as those written using MPI. A primary goal of the system, which we call MPD (for multipurpose daemon), is to be scalable. By this we mean that startup of interactive parallel jobs comprising thousands of processes is quick, that signals can be quickly delivered to processes, and that stdin, stdout, and stderr are managed intuitively. Our primary target is parallel machines made up of clusters of SMPs, but the system is also useful in more tightly integrated environments. We describe how MPD enables much faster startup and better runtime management of parallel jobs. We show how close control of stdio can support the easy implementation of a number of convenient system utilities, even a parallel debugger. We describe a simple but general interface that can be used to separate any process manager from a parallel library, which we use to keep MPD separate from MPICH.
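To illustrate what "a simple but general interface that separates any process manager from a parallel library" could look like, here is a hypothetical Python sketch. It is not MPD's or MPICH's actual interface; all class names, methods, and the single-host stand-in are invented for illustration.

# Hypothetical sketch only -- not the actual MPD/MPICH interface.  It
# illustrates hiding the process manager behind a small, generic
# interface that a parallel library can program against.
from abc import ABC, abstractmethod
from typing import Dict, List

class ProcessManager(ABC):
    @abstractmethod
    def spawn(self, executable: str, args: List[str], nprocs: int) -> List[int]:
        """Start nprocs copies of the program and return their ids."""

    @abstractmethod
    def signal(self, ids: List[int], signum: int) -> None:
        """Deliver a signal to all processes of the parallel job."""

    @abstractmethod
    def publish(self, key: str, value: str) -> Dict[str, str]:
        """Publish a (key, value) pair (e.g. a connection endpoint) and
        return everything published so far by the job."""

class LocalForkManager(ProcessManager):
    """Trivial single-host stand-in, only to make the sketch concrete."""
    def __init__(self) -> None:
        self._board: Dict[str, str] = {}

    def spawn(self, executable, args, nprocs):
        import subprocess
        return [subprocess.Popen([executable] + args).pid for _ in range(nprocs)]

    def signal(self, ids, signum):
        import os
        for pid in ids:
            os.kill(pid, signum)

    def publish(self, key, value):
        self._board[key] = value
        return dict(self._board)

# The parallel library only ever sees the ProcessManager type:
pm: ProcessManager = LocalForkManager()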
Many systems are intended to manage a collection of computing resources for both single-process and parallel jobs; see the survey by @cite_7 . Typically, these use a daemon that manages individual processes, with emphasis on jobs involving only a single process. Widely used systems include PBS @cite_11 , LSF @cite_0 , DQS @cite_22 , and LoadLeveler/POE @cite_9 . The Condor system @cite_23 is also widely used and supports parallel programs that use PVM @cite_2 or MPI @cite_8 @cite_12 . More specialized systems, such as MOSIX @cite_13 and GLUnix @cite_1 , provide single-system image support for clusters. Harness @cite_19 @cite_4 shares with MPD the goal of supporting management of parallel jobs. Its primary research goal is to demonstrate the flexibility of the ``plug-in'' approach to application design, potentially providing a wide range of services. The MPD system focuses more specifically on the design and implementation of services required for process management of parallel jobs, including high-speed startup of large parallel jobs on clusters and scalable standard I/O management. The book @cite_10 provides a good overview of metacomputing systems and issues, and Feitelson @cite_3 surveys support for scheduling parallel processes.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "1759783712", "", "116504777", "", "2091257550", "", "2043381104", "", "", "", "", "2004593585", "1513913200", "", "" ], "abstract": [ "Metacomputing frameworks have received renewed attention of late, fueled both by advances in hardware and networking, and by novel concepts such as computational grids. Harness is an experimental metacomputing system based upon the principle of dynamic reconfigurability not only in terms of the computers and networks that comprise the virtual machine, but also in the capabilities of the VM itself. These characteristics may be modified under user control via a \"plug-in\" mechanism that is the central feature of the system. In this paper we describe our preliminary experience in the design of PVM emulation by means of a set of plug-in.", "", "", "", "Preface Foreword 1. Grids in Context 2. Computational Grids I Applications 3 Distributed Supercomputing Applications 4 Real-Time Widely Distributed Instrumentation Systems 5 Data-Intensive Computing 6 Teleimmersion II Programming Tools 7 Application-Specific Tools 8 Compilers, Languages, and Libraries 9 Object-Based Approaches 10 High-Performance Commodity Computing III Services 11 The Globus Toolkit 12 High-Performance Schedulers 13 High-Throughput Resource Management 14 Instrumentation and Measurement 15 Performance Analysis and Visualization 16 Security, Accounting, and Assurance IV Infrastructure 17 Computing Platforms 18 Network Protocols 19 Network Quality of Service 20 Operating Systems and Network Interfaces 21 Network Infrastructure 22 Testbed Bridges from Research to Infrastructure Glossary Bibliography Contributor Biographies", "", "Recent improvements in network and workstation performance have made workstation clusters an attractive architecture for diverse workloads, including interactive sequential and parallel applications. Although viable hardware solutions are available today, the largest challenge in making such a cluster usable lies in the system software. This paper describes the design and implementation of GLUnix, operating system middleware for a cluster of workstations. GLUnix was designed to provide transparent remote execution, support for interactive parallel and sequential jobs, load ballancing, and backward compatibility for existing application binaries. GLUnix was constructed to be easily portable to a number of platforms. GLUnix has been in daily use for over two and a half years and is currently running on a 100-node cluster of Sun UltraSPARCs. This paper relates our experiences with designing, building, and operating GLUnix. We discuss three important design tradeoffs faced by any cluster system, and present the reasons for our choices. Each of these design decisions is then re-evaluated in light of both our experience and recent technological advancements. We then describe the user-level, centralized, event-driven architecture of GLUnix and highlight a number of aspects of the implementation. Performance and scalability measurements of the system indicate that a centralized, user-level design can scale gracefully to significant cluster sizes, incurring only an additional 220 μs of overhead per node for remote execution. 
The discussion focuses on the successes and failures we encountered while building and maintaining the system, including a characterization of the limitations of a user-level implementation and various features that were added to satisfy the user community. © 1998 John Wiley & Sons, Ltd.", "", "", "", "", "A continuing challenge to the scientific research and engineering communities is how to fully utilize computational hardware. In particular, the proliferation of clusters of high performance workstations has become an increasingly attractive source of compute power. Developments to take advantage of this environment have previously focused primarily on managing the resources, or on providing interfaces so that a number of machines can be used in parallel to solve large problems. Both approaches are desirable, and indeed should be complementary. Unfortunately, the resource management and parallel processing systems are usually developed by independent groups, and they usually do not interact well together. To bridge this gap, we have developed a framework for interfacing these two sorts of systems. Using this framework, we have interfaced PVM, a popular system for parallel programming with Condor, a powerful resource management system. This combined system is operational, and we have made further developments to provide a single coherent environment.", "Overview of MOSIX.- The UNIX file system.- Distributed UNIX file systems.- The UNIX process.- The MOSIX process.- The MOSIX linker.- Load balancing.- Scaling considerations.- System performance.- Distributed applications.", "", "" ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
Bigrams have been used as features for word sense disambiguation, particularly in the form of collocations where the ambiguous word is one component of the bigram (e.g., @cite_10 , @cite_0 , @cite_9 ). While some of the bigrams we identify are collocations that include the word being disambiguated, there is no requirement that this be the case.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_10" ], "mid": [ "2949743947", "2101210369", "" ], "abstract": [ "In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. We tested our WSD program, named Lexas , on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. Lexas achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WordNet .", "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 .", "" ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
Decision trees have been used in supervised learning approaches to word sense disambiguation, and have fared well in a number of comparative studies (e.g., @cite_2 , @cite_17 ). In the former they were used with bag-of-words feature sets, and in the latter with a mixed feature set that included the part-of-speech of neighboring words, three collocations, and the morphology of the ambiguous word. We believe that this paper is the first to employ decision trees based strictly on bigram features.
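As a rough sketch of this kind of classifier (illustrative only; the toy data, window handling, and feature selection below are our own assumptions, not the paper's actual setup), each training context is encoded as up to roughly 100 binary features, one per frequent bigram, and a decision tree is learned over those features:

# Illustrative sketch (not the paper's exact pipeline): binary bigram
# features from the context of an ambiguous word, fed to a decision tree.
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

# toy sense-tagged contexts for the ambiguous word "line"
train = [
    ("wait in a long line at the bank".split(), "queue"),
    ("the phone line went dead".split(), "cord"),
    ("stand in line for tickets".split(), "queue"),
    ("a new product line was announced".split(), "product"),
]

# pick the most frequent bigrams in the training contexts as features
counts = Counter(bg for tokens, _ in train for bg in bigrams(tokens))
features = [bg for bg, _ in counts.most_common(100)]   # ~100 binary features

def encode(tokens):
    present = set(bigrams(tokens))
    return [1 if bg in present else 0 for bg in features]

X = [encode(tokens) for tokens, _ in train]
y = [sense for _, sense in train]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([encode("please wait in line here".split())]))

In practice the candidate bigrams would be selected by a statistical test over a large sense-tagged corpus rather than by raw frequency over four toy sentences.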
{ "cite_N": [ "@cite_17", "@cite_2" ], "mid": [ "176608537", "2949482574" ], "abstract": [ "The Naive Mix is a new supervised learning algorithm that is based on a sequential method for selecting probabilistic models. The usual objective of model selection is to find a single model that adequately characterizes the data in a training sample. However, during model selection a sequence of models is generated that consists of the best-fitting model at each level of model complexity. The Naive Mix utilizes this sequence of models to define a probabilistic model which is then used as a probabilistic classifier to perform word-sense disambiguation. The models in this sequence are restricted to the class of decomposable log-linear models. This class of models offers a number of computational advantages. Experiments disambiguating twelve different words show that a Naive Mix formulated with a forward sequential search and Akaike's Information Criteria rivals established supervised learning algorithms such as decision trees (C4.5), rule induction (CN2) and nearest-neighbor classification (PEBLS).", "This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific problem tested involves disambiguating six senses of the word line'' using the words in the current and proceeding sentence as context. The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems." ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
The decision list is a closely related approach that has also been applied to word sense disambiguation (e.g., @cite_6 , @cite_14 , @cite_4 ). Rather than building and traversing a tree to perform disambiguation, a list is employed. In the general case a decision list may suffer from less fragmentation during learning than decision trees; as a practical matter this means that the decision list is less likely to be over-trained. However, we believe that fragmentation also reflects on the feature set used for learning. Ours consists of at most approximately 100 binary features. This results in a relatively small feature space that is not as likely to suffer from fragmentation as are larger spaces.
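For contrast with the decision tree sketch above, here is a schematic decision-list learner in the same spirit (our own simplification, not the exact method of any of the cited systems): rules mapping a single feature to a sense are ranked by a smoothed log-odds score, and classification fires the first applicable rule.

# Schematic decision-list sketch (an assumption, not any cited system):
# rank feature -> sense rules by a smoothed log-odds score and classify
# with the first rule whose feature is present in the context.
import math
from collections import defaultdict

def train_decision_list(instances):
    """instances: list of (set_of_features, sense)."""
    sense_counts = defaultdict(lambda: defaultdict(int))
    for feats, sense in instances:
        for f in feats:
            sense_counts[f][sense] += 1
    rules = []
    for f, per_sense in sense_counts.items():
        best = max(per_sense, key=per_sense.get)
        p = (per_sense[best] + 0.1) / (sum(per_sense.values()) + 0.2)
        score = abs(math.log(p / (1.0 - p)))
        rules.append((score, f, best))
    rules.sort(reverse=True)               # strongest evidence first
    return rules

def classify(rules, feats, default):
    for _, f, sense in rules:
        if f in feats:
            return sense
    return default

# toy usage with single-word context features
data = [({"bank", "long"}, "queue"), ({"phone", "dead"}, "cord"),
        ({"tickets"}, "queue"), ({"product"}, "product")]
rules = train_decision_list(data)
print(classify(rules, {"phone", "company"}, default="queue"))   # -> 'cord'

Because each rule tests a single feature, the training data is never partitioned at internal nodes the way it is in a tree, which is the fragmentation issue alluded to above.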
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6" ], "mid": [ "2952637164", "1489348810", "" ], "abstract": [ "Word sense disambiguation algorithms, with few exceptions, have made use of only one lexical knowledge source. We describe a system which performs unrestricted word sense disambiguation (on all content words in free text) by combining different knowledge sources: semantic preferences, dictionary definitions and subject domain codes along with part-of-speech tags. The usefulness of these sources is optimised by means of a learning algorithm. We also describe the creation of a new sense tagged corpus by combining existing resources. Tested accuracy of our approach on this corpus exceeds 92 , demonstrating the viability of all-word disambiguation rather than restricting oneself to a small sample.", "This paper describes a supervised algorithm for word sensedisambiguation based on hierarchies of decision lists. This algorithmsupports a useful degree of conditional branching while minimizing thetraining data fragmentation typical of decision trees. Classificationsare based on a rich set of collocational, morphological and syntacticcontextual features, extracted automatically from training data andweighted sensitive to the nature of the feature and feature class. Thealgorithm is evaluated comprehensively in the SENSEVAL framework,achieving the top performance of all participating supervised systems onthe 36 test words where training data is available.", "" ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
Inversion of functions on sets is done implicitly by every algorithm for solving systems of equations @cite_10 --- in this case the input set just contains one zero vector. It is mentioned explicitly mostly for computing the solution set of systems of inequalities @cite_32 @cite_37 .
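As a small illustration of what inverting a function on a set means computationally (a sketch under simplifying assumptions: one dimension, a hand-written interval extension, and a toy target set; this is not the algorithm of any of the cited works), boxes are classified by interval evaluation and bisected when undecided:

# Minimal sketch: approximate { x in X0 : f(x) in Y } by bisection,
# here for f(x) = x*x with a hand-coded interval extension.

def f_interval(lo, hi):
    # interval extension of f(x) = x*x on [lo, hi]
    cands = [lo * lo, hi * hi]
    if lo <= 0.0 <= hi:
        cands.append(0.0)
    return min(cands), max(cands)

def invert(x0, y, eps=1e-3):
    """Return (inner, boundary): boxes proven inside f^-1(y) and
    undecided boxes whose width has fallen below eps."""
    inner, boundary, todo = [], [], [x0]
    while todo:
        lo, hi = todo.pop()
        flo, fhi = f_interval(lo, hi)
        if flo >= y[0] and fhi <= y[1]:
            inner.append((lo, hi))            # image inside target: accept
        elif fhi < y[0] or flo > y[1]:
            pass                              # image misses target: discard
        elif hi - lo < eps:
            boundary.append((lo, hi))         # too small to decide
        else:
            mid = 0.5 * (lo + hi)
            todo += [(lo, mid), (mid, hi)]
    return inner, boundary

inner, boundary = invert((-3.0, 3.0), (1.0, 4.0))
# inner covers most of [-2,-1] and [1,2]; boundary hugs the four endpoints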
{ "cite_N": [ "@cite_37", "@cite_10", "@cite_32" ], "mid": [ "2106750726", "1483874187", "" ], "abstract": [ "A new method is presented for characterizing the set of all values of the parameters of a linear time-invariant model that are associated with a stable behavior. A formal Routh table is used to formulate the problem as one of set inversion, which is solved approximately but globally with tools borrowed from interval analysis. The method readily extends to the design of controllers stabilizing all models in a given class. >", "Preface Symbol index 1. Basic properties of interval arithmetic 2. Enclosures for the range of a function 3. Matrices and sublinear mappings 4. The solution of square linear systems of equations 5. Nonlinear systems of equations 6. Hull computation References Author index Subject index.", "" ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
First-order constraints occur frequently in control, and especially in robust control. Up to now they have either been solved by specialized methods @cite_19 @cite_3 @cite_1 or by applying general solvers like QEPCAD @cite_27 . In the first case one is usually restricted to conditions such as linearity, and in the second case one suffers from the high run-time complexity of computing exact solutions @cite_5 @cite_17 . We know of only one case where general solvers for first-order constraints have been applied to discrete-time systems @cite_26 , but they have been frequently applied to continuous systems @cite_14 @cite_15 @cite_25 . For non-linear discrete-time systems without perturbations or control, interval methods have also proved to be an important tool @cite_20 @cite_13 .
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_1", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2114681281", "2061749308", "2124488301", "1588998206", "", "2007074311", "2104715607", "2029381483", "63626580", "", "2121579770", "1601908671" ], "abstract": [ "State and output dead beat controllability tests for a very large class of polynomial systems with rational coefficients may be based on the quantifier elimination by partial cylindrical algebraic decomposition (QEPCAD) symbolic computation program. The method is unified for a very large class of systems and can handle one- or two-sided control constraints. Families of minimum time state output dead beat controllers are obtained. The computational complexity of the test is prohibitive for general polynomial systems, but by constraining the structure of the system we may beat the curse of complexity. A computationally less expensive algebraic test for output dead beat controllability for a class of odd polynomial systems is presented. Necessary and sufficient conditions are given. They are still very difficult to check. Therefore, a number of easier-to-check sufficient conditions are also provided. The latter are based on the Grobner basis method and QEPCAD. It is shown on a subclass of odd polynomial systems how it is possible to further reduce the computational complexity by exploiting the structure of the system.", "Many problems in control theory can be formulated as formulae in the first-order theory of real closed fields. In this paper we investigate some of the expressive power of this theory. We consider dynamical systems described by polynomial differential equations subjected to constraints on control and system variables and show how to formulate questions in the above framework which can be answered by quantifier elimination. The problems treated in this paper regard stationarity, stability, and following of a polynomially parametrized curve. The software package QEPCAD has been used to solve a number of examples.", "Abstract This paper surveys a subset of the body of research which was sparked by Kharitonov's Theorem. The focal point is extreme point results for robust stability and robust performance. That is, we give conditions under which satisfaction of a performance specification is ascertained for a family of systems by only checking a finite subset of the extreme members of this family. The results which are surveyed apply mainly to systems with structured real parametric uncertainty. In addition, a number of counterexamples are given to illustrate cases for which an extreme point result does not hold. For such cases, a solution via the so-called Edge Theorem is often possible.", "1. Introduction. 2. Linear Algebra. 3. Linear Systems. 4. H2 and Ha Spaces. 5. Internal Stability. 6. Performance Specifications and Limitations. 7. Balanced Model Reduction. 8. Uncertainty and Robustness. 9. Linear Fractional Transformation. 10. m and m- Synthesis. 11. Controller Parameterization. 12. Algebraic Riccati Equations. 13. H2 Optimal Control. 14. Ha Control. 15. Controller Reduction. 16. Ha Loop Shaping. 17. Gap Metric and ...u- Gap Metric. 18. Miscellaneous Topics. Bibliography. Index.", "", "The Cylindrical Algebraic Decomposition method (CAD) decomposes R^r into regions over which given polynomials have constant signs. An important application of CAD is quantifier elimination in elementary algebra and geometry. 
In this paper we present a method which intermingles CAD construction with truth evaluation so that parts of the CAD are constructed only as needed to further truth evaluation and aborts CAD construction as soon as no more truth evaluation is needed. The truth evaluation utilizes in an essential way any quantifiers which are present and additionally takes account of atomic formulas from which some variables are absent. Preliminary observations show that the new method is always more efficient than the original, and often significantly more efficient.", "We consider linear problems in fields, ordered fields, discretely valued fields (with finite residue field or residue field of characteristic zero) and fields with finitely many independent orderings and discrete valuations. Most of the fields considered will be of characteristic zero. Formally, linear statements about these structures (with parameters) are given by formulas of the respective first-order language, in which all bound variables occur only linearly. We study symbolic algorithms (linear elimination procedures) that reduce linear formulas to linear formulas of a very simple form, i.e. quantifier-free linear formulas, and algorithms (linear decision procedures) that decide whether a given linear sentence holds in all structures of the given class. For all classes of fields considered, we find linear elimination procedures that run in double exponential space and time. As a consequence, we can show that for fields (with one or several discrete valuations), linear statements can be transferred from characteristic zero to prime characteristic p, provided p is double exponential in the length of the statement. (For similar bounds in the non-linear case, see Brown, 1978.) We find corresponding linear decision procedures in the Berman complexity classes @[email protected]?NSTA(*,2^c^n,dn) for d = 1, 2. In particular, all hese procedures run in exponential space. The technique employed is quantifier elimination via Skolem terms based on Ferrante & Rackoff (1975). Using ideas of Fischer & Rabin (1974), Berman (1977), Furer (1982), we establish lower bounds for these problems showing that our upper bounds are essentially tight. For linear formulas with a bounded number of quantifiers all our algorithms run in polynomial time. For linear formulas of bounded quantifier alternation most of the algorithms run in time 2^O^(^n^^^k^) for fixed k.", "This paper shows how certain robust multi-objective feedback design problems can be reduced to quantifier elimination (QE) problems. In particular it is shown how robust stabilization and robust frequency domain performance specifications can be reduced to systems of polynomial inequalities with suitable logic quantifiers, ? and ?. Because of computational complexity the size of problems that can solved by QE methods is limited. However, the design problems considered here do not haveanalyticalsolutions, so that even the solution of modest-sized problems may be of practical interest.", "This paper deals with the determination of the position and orientation of a mobile robot from distance measurements provided by a belt of onboard ultrasonic sensors. The environment is assumed to be two-dimensional, and a map of its landmarks is available to the robot. In this context, classical localization methods have three main limitations. First, each data point provided by a sensor must be associated with a given landmark. 
This data-association step turns out to be extremely complex and time-consuming, and its results can usually not be guaranteed. The second limitation is that these methods are based on linearization, which makes them inherently local. The third limitation is their lack of robustness to outliers due, e.g., to sensor malfunctions or outdated maps. By contrast, the method proposed here, based on interval analysis, bypasses the data-association step, handles the problem as nonlinear and in a global way and is (extraordinarily) robust to outliers.", "", "Interval analysis is used to characterize the set of all input sequences with a given length that drive a nonlinear discrete-time state-space system from a given initial state to a given set of terminal states. No requirement other than computability (i.e. ability to be evaluated by a finite algorithm) is put on the nature of the state equations. The method is based on an algorithm for set inversion and approximates the solution set in a guaranteed way.", "PLEASE NOTE: The original Technical Report TR00853 is missing. A copy can be found at http: www.sciencedirect.com science article pii S0747717110800033" ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
Apart from the method used in this paper @cite_8 , there have been several successful attempts at solving special cases of first-order constraints, for example using classical interval techniques @cite_36 @cite_21 or constraint satisfaction @cite_0 , and very often in the context of robust control @cite_2 @cite_30 @cite_28 @cite_11 .
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_36", "@cite_28", "@cite_21", "@cite_0", "@cite_2", "@cite_11" ], "mid": [ "1988549558", "1504204411", "2403711553", "1972645441", "1542502870", "2611367645", "1596669356", "68519879" ], "abstract": [ "Many design problems, e.g. in control theory, amount to tuning a parameter vector c so as to guarantee that specifications are met for all feasible values of some unknown perturbation vector p. A new prototype algorithm for solving this guaranteed-tuning problem is proposed, and its convergence properties are established. It applies when the design specifications translate into a finite number of (possibly nonlinear) inequalities. Three test cases taken from the field of control are considered, namely the design of a PID controller robust to structured uncertainty, the control of a nonlinear discrete-time model with uncertain parameters and initial state, and a problem of motion planning, with obstacles to be avoided.", "This paper applies interval methods to a classical problem in computer algebra. Let a quantified constraint be a first-order formula over the real numbers. As shown by A. Tarski in the 1930's, such constraints, when restricted to the predicate symbols <, = and function symbols +, ×, are in general solvable. However, the problem becomes undecidable, when we add function symbols like sin. Furthermore, all exact algorithms known up to now are too slow for big examples, do not provide partial information before computing the total result, cannot satisfactorily deal with interval constants in the input, and often generate huge output. As a remedy we propose an approximation method based on interval arithmetic. It uses a generalization of the notion of cylindrical decomposition—as introduced by G. Collins. We describe an implementation of the method and demonstrate that, for quantified constraints without equalities, it can efficiently give approximate information on problems that are too hard for current exact methods.", "In this paper, theidentification problem, thetolerance problem, and thecontrol problem are treated for the interval linear equation Ax=b. These problems require computing an inner approximation of theunited solution set Σ∃∃(A, b)= x ∈ ℝ n | (∃A ∈ A)(Ax ∈ b) , of thetolerable solution set Σ∀∃(A, b)= x ∈ ℝ n | (∀A ∈ A)(Ax ∈ b) , and of thecontrollable solution set Σ∃∀(A, b)= x ∈ ℝ n | (∀b ∈ b)(Ax ∈b) respectively. Analgebraic approach to their solution is developed in which the initial problem is replaced by that of finding analgebraic solution of some auxiliary interval linear system in Kaucher extended interval arithmetic. The algebraic approach is proved almost always to give inclusion-maximal inner interval estimates of the solutionsets considered. We investigate basic properties of the algebraic solutions to the interval linear systems and propose a number of numerical methods to compute them. In particular, we present the simple and fastsubdifferential Newton method, prove its convergence and discuss numerical experiments.", "Abstract Several robustness problems such as stability and performance robustness analysis of feedback systems and robust design of control systems in the presence of mixed nonlinear parametric and nonparametric perturbations can be solved by means of algorithms based on interval-arithmetic computation. Some of the main algorithms available in the literature are presented, and their efficiency is tested on some examples of robustness analysis and design of control systems. 
© 1997 Elsevier Science Ltd.", "The work advances a numerical technique for computing enclosures of generalized AE-solution sets to interval linear systems of equations. We develop an approach (called algebraic) in which the outer estimation problem reduces to a problem of computing algebraic solutions of an auxiliary interval equation in Kaucher complete interval arithmetic.", "Non-linear real constraint systems with universally and or existentially quantified variables often need be solved in such contexts as s control design or sensor planning. To date, these systems are mostly handled by computing a quantifier-free equivalent form by means of Cylindrical Algebraic Decomposition (CAD). However, CAD restricts its input to be conjunctions and disjunctions of polynomial constraints with rational coefficients, while some applications such as camera control involve systems with arbitrary forms where time is the only universally quantified variable. In this paper, the handling of universally quantified variables is first related to the computation of inner-approximation of real relations. Algorithms for solving non-linear real constraint systems with universally quantified variables are then presented along with the theoretical framework on inner-approximation of relations supporting them. These algorithms are based on the computation of outer-approximations of the solution set of the negation of involved constraints. An application to the devising of a declarative modeller for expressing camera motion using a cinematic language is sketched, and results from a prototype are presented.", "The following sections are included: introduction; quantifier elimination; Bernstein expansion; approximation of the solution set; algorithm; examples; conclusions; acknowledgment; and references.", "This paper aims to start exploring the application of interval techniques to deal with robustness issues in the context of predictive control. The robust stability problem is transformed into that of checking the positivity of a rational function. Modal intervals are presented as a useful tool to deal with this kind of function." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
ii) The ideal gas with the Haldane statistics and the Sutherland-Wu equation. The series @math has an interpretation as the grand partition function of the ideal gas with the Haldane exclusion statistics @cite_16 . The finite @math -system appeared in @cite_16 as the thermal equilibrium condition for the distribution functions of the same system. See also @cite_1 for another interpretation. The one-variable case ) also appeared in @cite_26 as the thermal equilibrium condition for the distribution function of the Calogero-Sutherland model. As an application of our second formula in Theorem , we can quickly reproduce the ``cluster expansion formula'' in [Eq. (129)] I , which was originally calculated by the Lagrange inversion formula, as follows: where @math is the solution of ). The Sutherland-Wu equation also plays an important role for conformal field theory spectra. (See @cite_23 and the references therein.)
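For the reader's convenience, the classical one-variable Lagrange inversion formula referred to above can be stated as follows (this is the standard textbook form, not the paper's own multivariable power series formulae): if $w = w(z)$ is the formal power series solution of $w = z\,\phi(w)$ with $\phi(0) \neq 0$, then for any formal power series $H$ and every $n \ge 1$,

\[
[z^{\,n}]\, H\bigl(w(z)\bigr) \;=\; \frac{1}{n}\,[w^{\,n-1}]\,\Bigl( H'(w)\,\phi(w)^{\,n} \Bigr),
\]

where $[z^n]$ denotes the coefficient of $z^n$. Reading off these coefficients term by term is how a cluster expansion of a grand partition function can be extracted from its defining functional equation.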
{ "cite_N": [ "@cite_23", "@cite_16", "@cite_1", "@cite_26" ], "mid": [ "1995427979", "2095564275", "2000277525", "2013206718" ], "abstract": [ "Abstract We systematically study the exclusion statistics for quasi-particles for Conformal Field Theory spectra by employing a method based on recursion relations for truncated spectra. Our examples include generalized fermions in c CFT Z k parafermions, and spinons for the su ( n ) 1 , so ( n ) 1 and sp (2 n ) 1 Wess-Zumino-Witten models. For some of the latter examples we present explicit expressions for finitized affine characters and for the N -spinon decomposition of affine characters.", "We discuss the relationship between the classical Lagrange theorem in mathematics and the quantum statistical mechanics and thermodynamics of an ideal gas of multispecies quasiparticles with mutual fractional exclusion statistics. First, we show that the thermodynamic potential and the density of the system are analytically expressed in terms of the language of generalized cluster expansions, where the cluster coefficients are determined from Wu’s functional relations for describing the distribution functions of mutual fractional exclusion statistics. Second, we generalize the classical Lagrange theorem for inverting the one complex variable functions to that for the multicomplex variable functions. Third, we explicitly obtain all the exact cluster coefficients by applying the generalized Lagrange theorem. @S0163-1829 98!03335-9#", "We derive an exact integral representation for the gr and partition function for an ideal gas with exclusion statistics. Using this we show how the Wu's equation for the exclusion statistics appears in the problem. This can be an alternative proof for the Wu's equation. We also discuss that singularities are related to the existence of a phase transition of the system.", "We continue our investigation of a system of either fermions or bosons interacting in one dimension by a 2‐body potential V(r) = g r2. We first present an approximation for the eigenstates of a general 1‐dimensional quantum many‐body system. We then apply this approximation to the g r2 potential, allowing complete determination of the thermodynamic properties. Finally, comparing the results with those properties known exactly, we conjecture that the approximation is, in fact, exact for the g r2 potential." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
Below we list the related works on Conjectures and -- mostly chronologically. However, the list is by no means complete. The series @math in ) admits a natural @math -analogue called the fermionic formula . This is another fascinating subject, but we do not cover it here. See @cite_23 @cite_7 @cite_6 and the references therein. It is convenient to refer to the formula ) with the binomial coefficient ) as type I , and to the ones with the binomial coefficient in Remark as type II . (In the context of the -type integrable spin chains, @math and @math represent the numbers of @math -strings and @math -holes of color @math , respectively. Therefore one must demand @math , which implies that the relevant formulae are necessarily of type II.) The manifest expression of the decomposition of @math such as is referred to as type III , where @math is the character of the irreducible @math -module @math with highest weight @math . Since there is no essential distinction between these conjectured formulae for @math and @math , we simply refer to both cases as @math below. At this moment, however, the proofs have to be given separately for the non-simply-laced case @cite_34 .
{ "cite_N": [ "@cite_34", "@cite_7", "@cite_23", "@cite_6" ], "mid": [ "2950112875", "1618024583", "1995427979", "2106856555" ], "abstract": [ "", "Fermionic formulae originate in the Bethe ansatz in solvable lattice models. They are specific expressions of some q-polynomials as sums of products of q-binomial coefficients. We consider the fermionic formulae associated with general non-twisted quantum affine algebra U_q(X^ (1) _n) and discuss several aspects related to representation theories and combinatorics. They include crystal base theory, one dimensional sums, spinon character formulae, Q-system and combinatorial completeness of the string hypothesis for arbitrary X_n.", "Abstract We systematically study the exclusion statistics for quasi-particles for Conformal Field Theory spectra by employing a method based on recursion relations for truncated spectra. Our examples include generalized fermions in c CFT Z k parafermions, and spinons for the su ( n ) 1 , so ( n ) 1 and sp (2 n ) 1 Wess-Zumino-Witten models. For some of the latter examples we present explicit expressions for finitized affine characters and for the N -spinon decomposition of affine characters.", "We introduce a fermionic formula associated with any quantum affine algebra U q (X N (r) . Guided by the interplay between corner transfer matrix and the Bethe ansatz in solvable lattice models, we study several aspects related to representation theory, most crucially, the crystal basis theory. They include one-dimensional sums over both finite and semi-infinite paths, spinon character formulae, Lepowsky—Primc type conjectural formula for vacuum string functions, dilogarithm identities, Q-systems and their solution by characters of various classical subalgebras and so forth. The results expand [HKOTY1] including the twisted cases and more details on inhomogeneous paths consisting of non-perfect crystals. As a most intriguing example, certain inhomogeneous one-dimensional sums conjecturally give rise to branching functions of an integrable G 2 (1) -module related to the embedding G 2 (1) ↪ B 3 (1) ↪ D 4 1 ." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
2 @cite_24 . Kerov et al. proposed and proved the type II formula for @math by the combinatorial method, in which a bijection between the Littlewood-Richardson tableaux and the rigged configurations was constructed.
{ "cite_N": [ "@cite_24" ], "mid": [ "2038912750" ], "abstract": [ "Techniques developed in the realms of the quantum method of the inverse problem are used to analyze combinatorial problems (Young diagrams and rigged configurations)." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
4 @cite_14 . Ogievetsky and Wiegmann proposed the type III formula of @math for some @math for the exceptional algebras, based on the reproduction scheme.
{ "cite_N": [ "@cite_14" ], "mid": [ "2089188442" ], "abstract": [ "Abstract The factorized S -matrices as well as the eigenvalues of the transfer matrices for symmetrical degrees of fundamental representations of all Lie groups are presented in the form of the Bethe ansatz in terms of roots systems. As an application the explicit expressions of the S -matrices for the lowest dimension representations of some exceptional Lie groups and the solution of the principal chiral model are calculated." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
8 @cite_22 . Kleber analyzed a combinatorial structure of the type II formula for the simply-laced algebras. In particular, it was proved that the type III formula of @math and the corresponding type II formula are equivalent for @math and @math .
{ "cite_N": [ "@cite_22" ], "mid": [ "1653379927" ], "abstract": [ "The invention relates to a negative electrode for an alkaline electrolyte electric cell. The main electrode material is nickel lanthanide and is characterized by the fact that it also includes a mercury compound. It is applicable to secondary electric cells, in particular of the nickel- or silver-hydrogen type. Cells embodying such negative electrodes exhibit improved capacity irrespective of temperature and electrolyte concentration conditions as compared with like cells in which the mercury compound is not used with the nickel lanthanide." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
9 @cite_7 @cite_6 . Hatayama et al. gave a characterization of the type I formula as the solution of the @math -system given by @math -linear combinations of the @math -characters with the property equivalent to the convergence property. Using it, the equivalence of the type III formula of @math and the type I formula of @math for the classical algebras was shown @cite_7 . In @cite_6 , the type I and type II formulae, and the @math -systems for the twisted algebras @math were proposed. The type III formula of @math for @math , @math , @math , @math was also proposed, and the equivalence to the type I formula was shown in a similar way to the untwisted case.
{ "cite_N": [ "@cite_6", "@cite_7" ], "mid": [ "2106856555", "1618024583" ], "abstract": [ "We introduce a fermionic formula associated with any quantum affine algebra U q (X N (r) . Guided by the interplay between corner transfer matrix and the Bethe ansatz in solvable lattice models, we study several aspects related to representation theory, most crucially, the crystal basis theory. They include one-dimensional sums over both finite and semi-infinite paths, spinon character formulae, Lepowsky—Primc type conjectural formula for vacuum string functions, dilogarithm identities, Q-systems and their solution by characters of various classical subalgebras and so forth. The results expand [HKOTY1] including the twisted cases and more details on inhomogeneous paths consisting of non-perfect crystals. As a most intriguing example, certain inhomogeneous one-dimensional sums conjecturally give rise to branching functions of an integrable G 2 (1) -module related to the embedding G 2 (1) ↪ B 3 (1) ↪ D 4 1 .", "Fermionic formulae originate in the Bethe ansatz in solvable lattice models. They are specific expressions of some q-polynomials as sums of products of q-binomial coefficients. We consider the fermionic formulae associated with general non-twisted quantum affine algebra U_q(X^ (1) _n) and discuss several aspects related to representation theories and combinatorics. They include crystal base theory, one dimensional sums, spinon character formulae, Q-system and combinatorial completeness of the string hypothesis for arbitrary X_n." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
10 @cite_13 @cite_10 . The second formula in Conjecture was proposed and proved for @math @cite_13 , based on the formal completeness of the -type Bethe vectors. The same formula was proposed for @math , and the equivalence to the type I formula was proved @cite_10 . The type I formula is formulated in this form, and the characterization of the type I formula in @cite_7 was simplified as the solution of the @math -system with the convergence property.
{ "cite_N": [ "@cite_10", "@cite_13", "@cite_7" ], "mid": [ "", "1994934410", "1618024583" ], "abstract": [ "", "The ( U _ q ( s l (2)) ) Bethe equation is studied at q = 0. A linear congruence equation is proposed related to the string solutions. The number of its off-diagonal solutions is expressed in terms of an explicit combinatorial formula and coincides with the weight multiplicities of the quantum space.", "Fermionic formulae originate in the Bethe ansatz in solvable lattice models. They are specific expressions of some q-polynomials as sums of products of q-binomial coefficients. We consider the fermionic formulae associated with general non-twisted quantum affine algebra U_q(X^ (1) _n) and discuss several aspects related to representation theories and combinatorics. They include crystal base theory, one dimensional sums, spinon character formulae, Q-system and combinatorial completeness of the string hypothesis for arbitrary X_n." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
11 @cite_2 . Chari proved the type III formula of @math for @math for any @math for the classical algebras, and for some @math for the exceptional algebras.
{ "cite_N": [ "@cite_2" ], "mid": [ "1797902674" ], "abstract": [ "Radiation curable acrylated polyurethane is porduced by (a) producing an isocyanate-terminated intermediate by coreacting an organic diisocyanate with a combination of organic tri tetraol and organic diol, said combination being chosen from polyester tri tetraol-polyether diol and polyether tri tetraol-polyester diol combinations; (b) reacting the isocyanate-terminated intermediate with an hydroxyacrylate such as 2-hydroxyethyl acrylate. Unexpectedly, the oligomer has desirably low viscosity, yet cures upon exposure to radiation to a coating having good physical properties." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
12 @cite_18 . Okado constructed bijections between the rigged configurations and the crystals (resp. virtual crystals) corresponding to @math , with @math for @math , for @math and @math (resp. @math ). As a corollary, the type II formula of those @math was proved for @math and @math .
{ "cite_N": [ "@cite_18" ], "mid": [ "1582992125" ], "abstract": [ "A device having utility in interfering with a person's desire to hold an object such as a cigarette, cigar or pipe, between his lips, i.e., an anti-smoking device. In the form of a cigarette holder, it includes a generally tubular shell having first and second ends, with the first end being adapted to receive a cigarette; the second end thereof includes structure adapted to be held between a person's lips. A DC voltage source (such as a dry cell battery) of at least six volts and preferably nine volts is mounted within the shell. First and second electrically conductive members are connected to the output of the DC source, with the distal ends of said conductive members extending alongside the lip-contacting structure so that they may be readily touched by a person's lips. The distal ends of said conductive members are separated so as to form a normally open electrical path, such that placing the lip-contacting structure between a person's lips will instantaneously close the electrical path and result in the discharge of DC current from said source through the lips. A potentiometer is optionally provided to adjust the flow of current from a minimum of about one milliamp (in order to be discernable) to a maximum of about five milliamps (so as to avoid intolerable sensations). Additionally, means are disclosed for re-charging a battery which is permanently mounted within the shell of a cigarette holder or the like." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
As mentioned in the introduction, this is one of the few attempts to apply fold/unfold techniques in the field of concurrent languages. In fact, in the literature we find only three papers which are relatively closely related to the present one: Ueda and Furukawa UF88 defined transformation systems for the concurrent logic language GHC @cite_7 , Sahlin Sah95 defined a partial evaluator for AKL, while de Francesco and Santone in DFS96 presented a transformation system for CCS @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_7" ], "mid": [ "1503973138", "2163058466" ], "abstract": [ "Foreword. 1. Modelling Communication. 2. Basic Definitions. 3. Equational laws and Their Application. 4. Strong Bisimulation and Strong Equivalence. 5. Bisimulation and Observation Equivalence. 6. Further Examples. 7. The Theory of Observation Congruence. 8. Defining a Programming Language. 9. Operators and Calculi. 10. Specifications and Logic. 11. Determinancy and Confluence. 12. Sources and Related Work. Bibliography. Index.", "A set of Horn clauses, augmented with a ‘guard’ mechanism, is shown to be a simple and yet powerful parallel logic programming language." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
The transformation system we are proposing builds on the systems defined in the papers above and can be considered an extension of them. Unlike the previous cases, our system is defined for a generic (concurrent) constraint language. Thus, together with some new transformations such as distribution, backward instantiation, and branch elimination, we also introduce specific operations which allow constraint simplification and elimination (though some constraint simplification is done in @cite_9 as well).
{ "cite_N": [ "@cite_9" ], "mid": [ "1521956339" ], "abstract": [ "One of the main aims of this paper is to show that the nature of the communication mechanism of concurrent constraint languages is essentially different from the classical paradigms of CCS, CSP and ACP. We define indeed a compositional semantics based on linear sequences, while more complicated structures, like trees and failure sets, are needed to model composition ally CCS, CSP and ACP. From this model we are able to derive a fully abstract semantics by imposing some saturation conditions, that model the monotonic nature of communication in concurrent constraint languages. Finally, we show that if we eliminate the consistency check, and drop the distinction between success and deadlock, then our model is isomorphic to the semantics based on Scott’s closure operators proposed in [SRP91]." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
As previously mentioned, differently from our case, in @cite_9 a definition is considered which allows one to remove potentially selectable branches; the consequence is that the resulting transformation system is only partially (thus not totally) correct. We should mention that in @cite_9 two preliminary assumptions on the "scheduling" are made, in such a way that this limitation is actually less constraining than it might appear.
{ "cite_N": [ "@cite_9" ], "mid": [ "1521956339" ], "abstract": [ "One of the main aims of this paper is to show that the nature of the communication mechanism of concurrent constraint languages is essentially different from the classical paradigms of CCS, CSP and ACP. We define indeed a compositional semantics based on linear sequences, while more complicated structures, like trees and failure sets, are needed to model composition ally CCS, CSP and ACP. From this model we are able to derive a fully abstract semantics by imposing some saturation conditions, that model the monotonic nature of communication in concurrent constraint languages. Finally, we show that if we eliminate the consistency check, and drop the distinction between success and deadlock, then our model is isomorphic to the semantics based on Scott’s closure operators proposed in [SRP91]." ] }
cs0203030
2952475254
We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms. When the paths are known (either given by the adversary or computed as above) our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this paper we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet. Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.
The problem of choosing routes for a fixed set of packets was studied by Srinivasan and Teo @cite_5 and Bertsimas and Gamarnik @cite_13 . For example, @cite_5 presents an algorithm that minimizes the congestion and dilation of the routes up to a constant factor. This result complemented the paper of Leighton, Maggs and Rao @cite_1 , which showed that packets could be scheduled along a set of paths in O(congestion + dilation) time.
{ "cite_N": [ "@cite_1", "@cite_5", "@cite_13" ], "mid": [ "2084427019", "1986061053", "" ], "abstract": [ "In this paper, we prove that there exists a schedule for routing any set of packets with edge-simple paths, on any network, inO(c+d) steps, wherec is the congestion of the paths in the network, andd is the length of the longest path. The result has applications to packet routing in parallel machines, network emulations, and job-shop scheduling.", "We present the first constant-factor approximation algorithm for a fundamental problem: the store-and-forward packet routing problem on arbitrary networks. Furthermore, the queue sizes required at the edges are bounded by an absolute constant. Thus, this algorithmbalances a global criterion (routing time) with a local criterion (maximum queue size) and shows how to get simultaneous good bounds for both. For this particular problem, approximating the routing time well, even without considering the queue sizes, was open. We then consider a class of such local vs. global problems in the context of covering integer programs and show how to improve the local criterion by a logarithmic factor by losing a constant factor in the global criterion.", "" ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted to the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
Coherent integration and proper representation of amalgamated data are extensively studied in the literature (see, e.g., @cite_40 @cite_36 @cite_2 @cite_19 @cite_27 @cite_21 @cite_37 @cite_10 @cite_12 @cite_6 @cite_33 ). Common approaches for dealing with this task are based on techniques of belief revision @cite_21 , methods of resolving contradictions by quantitative considerations (such as "majority vote" @cite_37 ) or qualitative ones (e.g., defining priorities on different sources of information or preferring certain data over other data @cite_24 @cite_34 ), and approaches that are based on rewriting rules for representing the information in a specific form @cite_27 . As in our case, abduction is used for database updating in @cite_23 , and an extended form of abduction is used in @cite_17 @cite_18 to explain modifications in a theory.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_33", "@cite_36", "@cite_21", "@cite_6", "@cite_24", "@cite_19", "@cite_40", "@cite_27", "@cite_23", "@cite_2", "@cite_34", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2112965492", "1551453953", "1984704149", "", "1549828304", "2011524629", "1513716387", "1527742398", "2116204363", "", "1589487547", "", "", "2048333161", "1483885793", "86882622" ], "abstract": [ "The problem of integrating information from conflicting sources comes up in many current applications, such as cooperative information systems, heterogeneous databases, and multiagent systems. We model this by the operation of merging first-order theories. We propose a formal semantics for this operation and show that it has desirable properties, including abiding by majority rule in case of conflict and syntax independence. We apply our semantics to the special case when the theories to be merged represent relational databases under integrity constraints. We then present a way of merging databases that have different or conflicting schemas caused by problems such as synonyms, homonyms or type conflicts mentioned in the schema integration literature.", "This paper introduces techniques for updating knowledge bases represented in extended logic programs. Three different types of updates, view updates, theory updates, and inconsistency removal, are considered. We formulate these updates through abduction, and provide methods for computing them with update programs. An update program is an extended logic program which specifies changes on abductive hypotheses, then updates are computed by the U-minimal answer sets of an update program. The proposed technique provides a uniform framework for these different types of updates, and each update is computed using existing procedures of logic programming.", "The integration of knowledge for multiple sources is an importantaspect of automated reasoning systems. When different knowledge basesare used to store knowledge provided by multiple sources, we are facedwith the problem of integrating multiple knowledge bases: Under thesecircumstances, we are also confronted with the prospect ofinconsistency. In this paper we present a uniform theoretical framework,based on annotated logics , foramalgamating multiple knowledge bases when these knowledge bases(possibly) contain inconsistencies, uncertainties, and nonmonotonicmodes of negation. We show that annotated logics may be used, with somemodifications, to mediate betweendifferent knowledge bases. The multiple knowledge bases are amalgamatedby a transformation of the individual knowledge bases into new annotatedlogic programs, together with the addition of a new axiom scheme. Wecharacterize the declarative semantics of such amalgamated knowledgebases and study how the semantics of the amalgam is related to thesemantics of the individual knowledge bases being combined. —Author's Abstract", "", "The process of integrating knowledge coming from different sources has been widely investigated in the literature. Three distinct conceptual approaches to this problem have been most succesful: belief revision, merging and update. In this paper we present a framework that integrates these three approaches. In the proposed framework all three operations can be performed. We provide an example that can only be solved by applying more than one single style of knowledge integration and, therefore, cannot be addressed by anyone of the approaches alone. 
The framework has been implemented, and the examples shown in this paper (as well as other examples from the belief revision literature) have been successfully tested.", "Katsuno and Mendelzon divide theory change, the problem of adding new information to a logical theory, into two types: revision and update. We propose a third type of theory change: arbitration. The key idea is the following: the new information is considered neither better nor worse than the old information represented by the logical theory. The new information is simply one voice against a set of others already incorporated into the logical theory. From this follows that arbitration should be commutative. First we define arbitration by a set of postulates and then describe a model-theoretic characterization of arbitration for the case of propositional logical theories. We also study weighted arbitration where different models of a theory can have different weights.", "We present an approach for drawing plausible conclusions from inconsistent and incomplete knowledge-bases, which may also be prioritized. Our method is based on a four-valued semantics that is particularly suitable for reasoning with uncertainty. Our inference mechanism is closely related to some other well-known formalisms for handling inconsistent data, such as reasoning with maximal consistent subsets and possibilistic logic. It is shown that the formalism presented here is nonmonotonic, paraconsistent, and is capable of managing ranked data without having the “drowning problem”.", "In this paper we describe a new approach to repairing violations of integrity constraints in relational databases with null values. By adopting basic concepts from model-based diagnosis, we show how simultaneous reasons for violations of (different) constraints can be determined. These reasons, represented as sets of facts, directly indicate possible repair actions that guarantee to remove the observed violations.", "Combining knowledge present in multiple knowledge base systems into a single knowledge base is discussed. A knowledge based system can be considered an extension of a deductive database in that it permits function symbols as part of the theory. Alternative knowledge bases that deal with the same subject matter are considered. The authors define the concept of combining knowledge present in a set of knowledge bases and present algorithms to maximally combine them so that the combination is consistent with respect to the integrity constraints associated with the knowledge bases. For this, the authors define the concept of maximality and prove that the algorithms presented combine the knowledge bases to generate a maximal theory. The authors also discuss the relationships between combining multiple knowledge bases and the view update problem. >", "", "", "", "", "Abstract During the process of knowledge acquisition from different experts it is usual that contradictions occur. Therefore strategies are needed for dealing with divergent statements and conflicts. We provide a formal framework to represent, process and combine distributed knowledge. The representation formalism is many-valued logic, which is a widely accepted method for expressing uncertainty, vagueness, contradictions and lack of information. Combining knowledge as proposed here makes use of the bilattice approach, which turns out to be very flexible and suggestive in the context of combining divergent information. 
We give some guidelines for choosing truth value spaces, assigning truth values and defining global operators to encode integration strategies.", "We describe the theory and implementation of a general theorem-proving technique for checking integrity of deductive databases recently proposed by Sadri and Kowalski. The method uses an extension of the SLDNF proof procedure and achieves the effect of the simplification algorithms of Nicolas, Lloyd, , and Decker by reasoning forwards from the update and thus focusing on the relevant parts of the database and the relevant constraints.", "This paper proposes a method of nonmonotonic theory change. We first introduce a new form of abduction that can account for observations in nonmonotonic situation. Then we provide a framework of autoepistemic update, which describes nonmonotonic theory change through the extended abductive framework. The proposed update semantics is fairly general and provides a unified framework for various update semantics such as first-order update, view update of databases, and contradiction removal of nonmonotonic theories." ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted to the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
The use of three-valued logics is also a well-known technique for maintaining incomplete or inconsistent information; such logics are often used for defining fixpoint semantics of incomplete logic programs @cite_32 @cite_3 , and so in principle they can be applied to integrity constraints in an (extended) clause form @cite_11 . Three-valued formalisms such as LFI @cite_0 are also the basis of paraconsistent methods to construct database repairs @cite_8 and are useful in general for pinpointing inconsistencies @cite_7 . As noted above, this is also the role of the three-valued semantics in our case.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_32", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "", "1854613159", "2003531456", "1968513265", "1785931840", "1489867543" ], "abstract": [ "", "When integrating data coming from multiple different sources we are faced with the possibility of inconsistency in databases. In this paper, we use one of the paraconsistent logics introduced in [9,7] (LFI1) as a logical framework to model possibly inconsistent database instances obtained by integrating different sources. We propose a method based on the sound and complete tableau proof system of LFI1 to treat both the integration process and the evolution of the integrated database submitted to users updates. In order to treat the integrated database evolution, we introduce a kind of generalized database context, the evolutionary databases, which are databases having the capability of storing and manipulating inconsistent information and, at the same time, allowing integrity constraints to change in time. We argue that our approach is sufficiently general and can be applied in most circumstances where inconsistency may arise in databases.", "The use of conventional classical logic is misleading for characterizing the behavior of logic programs because a logic program, when queried, will do one of three things: succeed with the query, fail with it, or not respond because it has fallen into infinite backtracking. In [7] Kleene proposed a three-valued logic for use in recursive function theory. The so-called third truth value was really undefined: truth value not determined. This logic is a useful tool in logic-program specification, and in particular, for describing models. (See [11].) Tarski showed that formal languages, like arithmetic, cannot contain their own truth predicate because one could then construct a paradoxical sentence that effectively asserts its own falsehood. Natural languages do allow the use of \"is true\", so by Tarski's argument a semantics for natural language must leave truth-value gaps: some sentences must fail to have a truth value. In [8] Kripke showed how a model having truth-value gaps, using Kleene's three-valued logic, could be specified. The mechanism he used is a famiUar one in program semantics: consider the least fixed point of a certain monotone operator. But that operator must be defined on a space involving three-valued logic, and for Kripke's application it will not be continuous. We apply techniques similar to Kripke's to logic programs. We associate with each program a monotone operator on a space of three-valued logic interpretations, or better partial interpretations. This space is not a complete lattice, and the operators are not, in general, continuous. But least and other fixed points do exist. These fixed points are shown to provide suitable three-valued program models. They relate closely to the least and greatest fixed points of the operators used in [1]. Because of the extra machinery involved, our treatment allows for a natural consideration of negation, and indeed, of the other prepositional connectives as well. And because of the elaborate structure of fixed points available, we are able to", "A general logic program (abbreviated to \"program\" hereafter) is a set of roles that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) slttmg above elementary relations (EDB, facts). 
It is desirable to associate one Herbrand model with a program and think of that model as the \"meaning of the program, \" or Its \"declarative semantics. \" Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a \"satisfactory\" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is m fact a total model. it is called the well-founded model. It n shown that the class of programs possessing a total well-founded model properly includes previously studied classes of \"stratified\" and \"locally stratified\" programs, The method in this paper is also compared with other proposals in the literature, including Clark's \"program completion, \" Fitting's and Kunen's 3-vahred interpretations of it, and the \"stable models\" of Gelfond and Lifschitz.", "The logics of formal inconsistency (LFI’s) are logics that allow to explicitly formalize the concepts of consistency and inconsistency by means of formulas of their language. Contradictoriness, on the other hand, can always be expressed in any logic, provided its language includes a symbol for negation. Besides being able to represent the distinction between contradiction and inconsistency, LFI’s are non-explosive logics, in the sense that a contradiction does not entail arbitrary statements, but yet are gently explosive, in the sense that, adjoining the additional requirement of consistency, then contradictoriness do cause explosion. Several logics can be seen as LFI’s, among them the great majority of paraconsistent systems developed under the Brazilian and Polish tradition. We present here tableau systems for some important LFI’s: bC, Ci and LFI1.", "The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic, its relationships to other logics in the area of non-monotonic reasoning, logic programming and deductive databases, and to show its application for knowledge representation by giving a typology of definitional knowledge." ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted to the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
A closely related topic is the problem of giving consistent query answers in inconsistent databases @cite_26 @cite_15 @cite_27 . The idea is to answer database queries in a consistent way without computing the repairs of the database.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_26" ], "mid": [ "", "35390552", "2077518845" ], "abstract": [ "", "This paper investigates, several methods for coping with inconsistency caused by multiple source information by introducing suitable consequence relations capable of inferring non trivial conclusions from an inconsistent stratified knowledge base. Some of these methods presuppose a revision step, namely a selection of one or several consistent subsets of formulas, and then classical inference is used for inferring from these subsets. Two alternative methods that do not require any revision step are studied: inference based on arguments and a new approach called safely supported inference, where inconsistency is kept local. These two last methods look suitable when the inconsistency is due to the presence of several sources of information. The paper offers a comparative study of the various inference modes under inconsistency.", "In this paper we consider the problem of the logical characterization of the notion of consistent answer in a relational database that may violate given integrity constraints. This notion is captured in terms of the possible repaired versions of the database. A method for computing consistent answers is given and its soundness and completeness (for some classes of constraints and queries) proved. The method is based on an iterative procedure whose termination for several classes of constraints is proved as well." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Among the load-balancing algorithms based on load, a very common approach is to choose the server with the least reported load from among a set of servers. This approach performs well in a homogeneous system where the task allocation is performed by a single centralized entity (dispatcher) which has complete, up-to-date load information @cite_25 @cite_35 . In a system where multiple dispatchers independently perform the allocation of tasks, however, this approach has been shown to behave badly, especially if the load information used is stale @cite_28 @cite_46 @cite_13 @cite_47 . Mitzenmacher talks about the "herd behavior" that can occur when servers that have reported low load are inundated with requests from dispatchers until new load information is reported @cite_13 (a toy simulation of this effect is sketched after this record).
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_28", "@cite_46", "@cite_13", "@cite_25" ], "mid": [ "1997049009", "2155859954", "2022049964", "2080912525", "345615294", "2064823719" ], "abstract": [ "We consider a queuing system consisting of a finite number of identical exponential servers. Each server has its own queue, and upon arrival each customer must be assigned to some server's queue. Under the assumption that no jockeying between queues is permitted, it is shown that the intuitively satisfying rule of assigning each arrival to the shortest line maximizes, with respect to stochastic order, the discounted number of customers to complete their service in any time t. QUEUING; SHORTEST LINE; STOCHASTIC ORDER; MARKOV DECISION PROCESSES", "The problem of judiciously and transparently redistributing the load of the system among its nodes so that overall performance is maximized is discussed. Several key issues in load distributing for general-purpose systems, including the motivations and design trade-offs for load-distributing algorithms, are reviewed. In addition, several load-distributing algorithms are described and their performances are compared. These algorithms are sender-initiated algorithms, receiver-initiated algorithms, symmetrically initiated algorithms, and adaptive algorithms. Load-distributing policies used in existing systems are examined, and conclusions about which algorithm might help in realizing the most benefits of load distributing are drawn. >", "Rather than proposing a specific load sharing policy for implementation, the authors address the more fundamental question of the appropriate level of complexity for load sharing policies. It is shown that extremely simple adaptive load sharing policies, which collect very small amounts of system state information and which use this information in very simple ways, yield dramatic performance improvements. These policies in fact yield performance close to that expected from more complex policies whose viability is questionable. It is concluded that simple policies offer the greatest promise in practice, because of their combination of nearly optimal performance and inherent stability.", "The authors study the performance characteristics of simple load-sharing algorithms for distributed systems. In the systems under consideration, it is assumed that nonnegligible delays are encountered in transferring tasks from one node to another and in gathering remote state information. Because of these delays, the state information gathered by the load-sharing algorithms is out of date by the time the load-sharing decisions are taken. The authors analyze the effects of these delays on the performance of three algorithms, called forward, reverse, and symmetric. They formulate queueing-theoretic models for each of the algorithms operating in a homogeneous system under the assumption that the task arrival process at each node is Poisson and the service times and task transfer times are exponentially distributed. Each of the models is solved using the matrix-geometric solution technique, and the important performance metrics are derived and studied. >", "", "We consider a queuing system with several identical servers, each with its own queue. Identical customers arrive according to some stochastic process and as each customer arrives it must be assigned to some server's queue. No jockeying amongst the queues is allowed. 
We are interested in assigning the arriving customers so as to maximize the number of customers which complete their service by a certain time. If each customer's service time is a random variable with a non-decreasing hazard rate then the strategy which does this is one which assigns each arrival to the shortest queue. QUEUING; SHORTEST LINE; STOCHASTIC ORDER; MULTI-SERVER" ] }
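The herd effect described above can be illustrated with a toy simulation. The sketch below is not taken from any of the cited papers; the function name simulate_herd and all parameter values (number of servers, dispatchers per step, report-refresh interval, drain rate) are illustrative assumptions. It shows how, when every dispatcher routes to the server with the lowest reported load, all traffic piles onto a single server between load-report refreshes.

```python
def simulate_herd(num_servers=4, num_dispatchers=8, steps=60,
                  update_every=10, drain_per_step=2):
    """Toy model of herd behavior under stale load reports: every
    dispatcher sends its request to the server with the lowest
    *reported* load, but reports are only refreshed every
    `update_every` steps, so between refreshes one server absorbs
    all of the traffic. All parameter values are illustrative."""
    true_load = [0] * num_servers
    reported = list(true_load)
    peak_imbalance = 0
    for step in range(steps):
        if step % update_every == 0:
            reported = list(true_load)          # fresh load report
        for _ in range(num_dispatchers):        # one request per dispatcher
            target = min(range(num_servers), key=lambda i: reported[i])
            true_load[target] += 1
        true_load = [max(0, load - drain_per_step) for load in true_load]
        peak_imbalance = max(peak_imbalance, max(true_load) - min(true_load))
    return true_load, peak_imbalance

print(simulate_herd())  # the load gap between servers grows large between refreshes
```

In this toy model, shrinking update_every keeps the imbalance small, while long refresh intervals let it grow, which is the staleness effect analyzed in the cited papers.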
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Dahlin proposes load-interpretation algorithms @cite_1 . These algorithms take into account the age (staleness) of the load information reported by each of a set of distributed homogeneous servers, as well as an estimate of the rate at which new requests arrive at the whole system, to determine to which server to allocate a request (a rough sketch of one such age-correction rule follows this record).
{ "cite_N": [ "@cite_1" ], "mid": [ "2109440766" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
@cite_16 propose an algorithm that first randomly selects @math servers. The algorithm then weighs the servers by their load information and chooses a server with probability inversely proportional to the load reported by that server (an illustrative sketch of this selection rule follows this record). When @math , where @math is the total number of servers, the algorithm is shown to perform better than previous load-based algorithms, and for this reason we focus on this algorithm in this paper.
{ "cite_N": [ "@cite_16" ], "mid": [ "2152558834" ], "abstract": [ "URL, or layer-5, switches can be used to implement locally and globally distributed Web sites. URL switches must be able to exploit knowledge of server load and content (e.g., of reverse caches). Implementing globally distributed Web sites offers difficulties not present in local server clusters due to bandwidth and delay constraints in the Internet. With delayed load information, server selection methods based on choosing the least-loaded server will result in oscillations in network and server load. In this paper, methods that make effective use of delayed load information are described and evaluated. The new Pick-KX method is developed and shown to be better than existing methods. Load information is adjusted with probabilistic information using Bloom filter summaries of site content. A combined load and content metric is suggested for use for selecting the best server in a globally distributed site." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Another approach is to exclude servers that exceed some utilization threshold and to choose from the remaining servers (a minimal sketch of this rule follows this record). @cite_6 and @cite_47 classify machines as lightly-utilized or heavily-utilized and then choose randomly from the lightly-utilized servers; this work focuses on local-area distributed systems. The authors of @cite_2 use this approach to enhance round-robin DNS load-balancing across a set of widely distributed heterogeneous web servers. Specifically, when a web server surpasses a utilization threshold it sends an alarm signal to the DNS system indicating it is out of commission. The server is excluded from the DNS resolution until it sends another signal indicating that it is below the threshold and free to service requests again. In this work, the maximum capacities of the most capable servers are at most three times those of the least capable servers.
{ "cite_N": [ "@cite_47", "@cite_6", "@cite_2" ], "mid": [ "2155859954", "2080634500", "1747723070" ], "abstract": [ "The problem of judiciously and transparently redistributing the load of the system among its nodes so that overall performance is maximized is discussed. Several key issues in load distributing for general-purpose systems, including the motivations and design trade-offs for load-distributing algorithms, are reviewed. In addition, several load-distributing algorithms are described and their performances are compared. These algorithms are sender-initiated algorithms, receiver-initiated algorithms, symmetrically initiated algorithms, and adaptive algorithms. Load-distributing policies used in existing systems are examined, and conclusions about which algorithm might help in realizing the most benefits of load distributing are drawn. >", "In this paper, we study the performance characteristics of simple load sharing algorithms for heterogeneous distributed systems. We assume that nonnegligible delays are encountered in transferring jobs from one node to another. We analyze the effects of these delays on the performance of two threshold-based algorithms called Forward and Reverse. We formulate queuing theoretic models for each of the algorithms operating in heterogeneous systems under the assumption that the job arrival process at each node in Poisson and the service times and job transfer times are exponentially distributed. The models are solved using the Matrix-Geometric solution technique. These models are used to study the effects of different parameters and algorithm variations on the mean job response time: e.g., the effects of varying the thresholds, the impact of changing the probe limit, the impact of biasing the probing, and the optimal response times over a large range of loads and delays. Wherever relevant, the results of the models are compared with the M M 1 model, representing no load balancing (hereafter referred to as NLB), and the M M K model, which is an achievable lower bound (hereafter referred to as LB).", "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, cabled adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Another well-studied load-balancing approach within a cluster is to have heavily loaded servers hand off requests they receive to less-loaded servers in the cluster, or to have lightly loaded servers attempt to pull tasks from heavily loaded servers (e.g., @cite_9 @cite_10 ). This can be achieved through techniques such as HTTP redirection (e.g., @cite_32 @cite_31 @cite_36 ), packet header rewriting (e.g., @cite_24 ), or remote script execution @cite_43 . HTTP redirection adds an extra client round trip for every rescheduled request. TCP/IP hand-off and packet header rewriting require changes in the OS kernel or network interface drivers. Remote script execution requires trust between the serving entities.
{ "cite_N": [ "@cite_36", "@cite_9", "@cite_32", "@cite_24", "@cite_43", "@cite_31", "@cite_10" ], "mid": [ "2151744612", "2149912409", "2159285576", "2143649597", "", "", "2115939782" ], "abstract": [ "Users of highly popular Web sites may experience long delays when accessing information. Upgrading content site infrastructure from a single node to a locally distributed Web cluster composed by multiple server nodes provides limited relief, because the cluster wide-area connectivity may become the bottleneck. A better solution is to distribute Web clusters over the Internet by placing content nodes in strategic locations. A geographically distributed architecture where the Domain Name System (DNS) servers evaluate network proximity and users are served from the closest cluster reduces network impact on response time. On the other hand, serving closest requests only may cause unbalanced servers and may increase system impact on response time. To achieve a scalable Web system, we propose to integrate DNS proximity scheduling with an HTTP request redirection mechanism that any Web server can activate. We demonstrate through simulation experiments that this further dispatching mechanism augments the percentage of requests with guaranteed response time, thereby enhancing the Quality of Service of geographically distributed Web sites. However, HTTP request redirection should be used selectively because the additional round-trip increases network impact on latency time experienced by users. As a further contribution, this paper proposes and compares various mechanisms to limit reassignments with no negative consequences on load balancing.", "Load sharing is a technique to improve the performance of distributed systems by distributing the system workload from heavily loaded nodes, where service is poor, to lightly loaded nodes in the system. Previous studies have considered two adaptive load sharing policies: sender-initiated and receiver-initiated. In the sender-initiated policy, a heavily loaded node attempts to transfer work to a lightly loaded node and in the receiver-initiated policy a lightly loaded node attempts to get work from a heavily loaded node. Almost all the previous studies assumed the first-come first-served node scheduling policy; furthermore, analysis and simulations in these studies have been done under the assumption that the job service times are exponentially distributed and the job arrivals form a Poisson process (i.e., job inter-arrival times are exponentially distributed). The goal of this paper is to fill the void in the existing literature. We study the impact of these assumptions on the performance of the sender-initiated and receiver initiated policies. We consider three node scheduling policies-first-come first-served (FCFS), shortest job first (SJF), and round robin (RR) policies. Furthermore, we also look at the impact of variance in the inter-arrival times and in the job service times. 
Our results show that: (i) When non-preemptive node scheduling policies (FCFS and SJF) are used, the receiver-initiated policy is (substantially) more sensitive to variance in inter-arrival times than the sender-initiated policies and the sender-initiated policies are relatively more sensitive to the variance in job service times; (ii) When the preemptive node scheduling policy (RR) is used, the sender-initiated policy provides a better performance than the receiver-initiated policy.", "Replication of information among multiple World Wide Web servers is necessary to support high request rates to popular Web sites. A clustered Web server organization is preferable to multiple independent mirrored servers because it maintains a single interface to the users and has the potential to be more scalable, fault-tolerant and better load-balanced. In this paper, we propose a Web cluster architecture in which the Domain Name System (DNS) server, which dispatches the user requests among the servers through the URL name to the IP address mapping mechanism, is integrated with a redirection request mechanism based on HTTP. This should alleviate the side-effect of caching the IP address mapping at intermediate name servers. We compare many alternative mechanisms, including synchronous vs. asynchronous activation and centralized vs. distributed decisions on redirection. Moreover, we analyze the reassignment of entire domains or individual client requests, different types of status information and different server selection policies for redirecting requests. Our results show that the combination of centralized and distributed dispatching policies allows the Web server cluster to handle high load skews in the WWW environment.", "We present and evaluate an implementation of a prototype scalable web server consisting of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using round robin DNS (RR-DNS) technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host-namely, the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about the remaining hosts in the system. Load information is maintained using periodic multicast amongst the cluster hosts. Performance measurements suggest that our prototype outperforms both pure RR-DNS and the stateless DPR solutions.", "", "", "Two location policies that, by adapting to the system load, capture the advantages of receiver-initiated, sender-initiated, and symmetrically initiated algorithms are presented. A key feature of these location policies is that they are general and can be used in conjunction with a broad range of existing transfer policies. By means of simulation, two representative algorithms making use of these adaptive location policies are shown to be stable and to improve performance significantly relative to nonadaptive policies. >" ] }
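The hand-off techniques surveyed in the paragraph above are easy to illustrate. Below is a hedged Python sketch, not any cited system's implementation, of the simplest of them: an HTTP server that sheds load by issuing 302 redirects to a less-loaded peer once its recent request rate crosses a threshold. The peer URLs and the threshold are illustrative assumptions.

```python
# Hedged sketch (not any cited system): an overloaded HTTP server sheds load by
# redirecting clients to a less-loaded peer with a 302 response. Peer URLs and
# the threshold are illustrative assumptions; a real system would probe peers.
import random
import time
from collections import deque
from http.server import BaseHTTPRequestHandler, HTTPServer

PEERS = ["http://replica-b.example:8080", "http://replica-c.example:8080"]
LOAD_THRESHOLD = 50            # requests/second above which we redirect
recent = deque()               # timestamps of recently served requests

def overloaded() -> bool:
    now = time.time()
    while recent and now - recent[0] > 1.0:
        recent.popleft()       # keep only the last second of history
    return len(recent) > LOAD_THRESHOLD

class RedirectingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        recent.append(time.time())
        if overloaded():
            # Hand the request off via HTTP redirection: costs the client one
            # extra round trip, but needs no kernel or NIC-driver changes
            # (unlike TCP hand-off / packet-rewriting approaches).
            self.send_response(302)
            self.send_header("Location", random.choice(PEERS) + self.path)
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"served locally\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectingHandler).serve_forever()
```

The sketch also makes the stated trade-off concrete: redirection is simple and portable, but every rescheduled request pays an additional client round trip.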
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
A considerable amount of work has looked at balancing load across multi-server homogeneous web sites by leveraging the DNS service that maps a web page's URL to the IP address of a web server serving that URL. Round-robin DNS was proposed, in which the DNS system maps requests to web servers in a round-robin fashion @cite_22 @cite_14 . Because DNS mappings carry a Time-to-Live (TTL) field and tend to be cached at the local name server in each domain, this approach can cause a large number of client requests from a particular domain to be mapped to the same web server during the TTL period. Thus, round-robin DNS achieves good balance only so long as each domain has the same client request rate. Moreover, round-robin load-balancing does not work in a heterogeneous peer-to-peer context because each serving replica receives a uniform rate of requests regardless of whether it can handle that rate. Work that takes the domain request rate into account improves upon round-robin DNS and is described in @cite_34 .
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_34" ], "mid": [ "", "2037842078", "2138403394" ], "abstract": [ "", "Abstract While the World Wide Web (www) may appear to be intrinsically scalable through the distribution of files across a series of decentralized servers, there are instances where this form of load distribution is both costly and resource intensive. In such cases it may be necessary to administer a centrally located and managed http server. Given the exponential growth of the internet in general, and www in particular, it is increasingly more difficult for persons and organizations to properly anticipate their future http server needs, both in human resources and hardware requirements. It is the purpose of this paper to outline the methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server. The implementation described in the following pages allows for dynamic scalability by rotating through a pool of http servers that are alternately mapped to the hostname alias of the www server. The key components of this configuration include: (1) cluster of identically configured http servers; (2) use of Round-Robin DNS for distributing http requests across the cluster; (3) use of distributed File System mechanism for maintaining a synchronized set of documents across the cluster; and (4) method for administering the cluster. The result of this design is that we are able to add any number of servers to the available pool, dynamically increasing the load capacity of the virtual server. Implementation of this concept has eliminated perceived and real vulnerabilities in our single-server model that had negatively impacted our user community. This particular design has also eliminated the single point of failure inherent in our single-server configuration, increasing the likelihood for continued and sustained availability. while the load is currently distributed in an unpredictable and, at times, deleterious manner, early implementation and maintenance of this configuration have proven promising and effective.", "A distributed Web system, consisting of multiple servers for data retrieval and a Domain Name Server (DNS) for address resolution, can provide the scalability necessary to keep up with growing client demand at popular sites. However, balancing the requests among these atypical distributed servers opens interesting new challenges. Unlike traditional distributed systems in which a centralized scheduler has full control of the system, the DNS controls only a small fraction of the requests reaching the Web site. This makes it very difficult to avoid overloading situations among the multiple Web servers. We adapt traditional scheduling algorithms to the DNS, propose new policies, and examine their impact. Extensive simulation results show the advantage of using strategies that schedule requests on the basis of the origin of the clients and very limited state information, such as whether a server is overloaded or not. Conversely, algorithms that use detailed state information often exhibit the worst performance." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
The authors later extend this work to balance load across a set of widely distributed heterogeneous web servers @cite_2 . This work proposes the use of adaptive TTLs, where the TTL for a DNS mapping is set inversely proportional to the domain's local client request rate for the mapping of interest (as reported by the domain's local name server). The TTL is at the same time set proportional to the chosen web server's maximum capacity. As a result, web servers with high maximum capacity receive DNS mappings with longer TTLs, and domains with low request rates receive mappings with longer TTLs. Max-Cap, the algorithm proposed in this thesis, also uses the maximum capacities of the serving replica nodes to allocate requests proportionally. The main difference is that in the cited work the root DNS scheduler acts as a centralized dispatcher that sets all DNS mappings and is assumed to know the request rate within the requesting domain. In the peer-to-peer case, the authority node knows neither the request rate throughout the network nor the size of the set of requesting nodes.
{ "cite_N": [ "@cite_2" ], "mid": [ "1747723070" ], "abstract": [ "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, cabled adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Lottery scheduling is another technique that, like Max-Cap, uses proportional allocation. This approach has been proposed in the context of resource allocation within an operating system (the Mach microkernel) @cite_23 . Client processes hold tickets that give them access to particular resources in the operating system. Clients are allocated resources by a centralized lottery scheduler proportionally to the number of tickets they own and can donate their tickets to other clients in exchange for tickets at a later point. Max-Cap is similar in that it allocates requests to a replica node proportionally to the maximum capacity of the replica node. The main difference is that in Max-Cap the allocation decision is completely distributed with no opportunity for exchange of resources across replica nodes.
{ "cite_N": [ "@cite_23" ], "mid": [ "2111087562" ], "abstract": [ "This paper presents lottery scheduling, a novel randomized resource allocation mechanism. Lottery scheduling provides efficient, responsive control over the relative execution rates of computations. Such control is beyond the capabilities of conventional schedulers, and is desirable in systems that service requests of varying importance, such as databases, media-based applications, and networks. Lottery scheduling also supports modular resource management by enabling concurrent modules to insulate their resource allocation policies from one another. A currency abstraction is introduced to flexibly name, share, and protect resource rights. We also show that lottery scheduling can be generalized to manage many diverse resources, such as I O bandwidth, memory, and access to locks. We have implemented a prototype lottery scheduler for the Mach 3.0 microkernel, and found that it provides flexible and responsive control over the relative execution rates of a wide range of applications. The overhead imposed by our unoptimized prototype is comparable to that of the standard Mach timesharing policy." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
Research in classification and clustering methods for XML or semi-structured documents is currently very active. New document models have been proposed ( @cite_1 , @cite_7 ) to extend the classical vector model and take into account both the structure and the textual content. These models distinguish words appearing in different types of XML elements in a generic way, whereas our approach uses the structure to manually select the types of elements relevant to a specific mining objective.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2134222424", "1970881376" ], "abstract": [ "In this paper, w edescribe a novel text classi er that can e ectiv ely cope with structured documents. We report experiments that compare its performance with that of a wellknown probabilistic classi er. Our novel classi er can take adv antage of the information in the structure of document that con ventional, purely term-based classi ers ignore.Conventional classi ers are mostly based on the vector space model of document, which views a document simply as an n-dimensional vector of terms. T o retain the information in the structure, w e ha ve dev eloped a structured vector model, which represents a document with a structured vector, whose elements can be either terms or other structured vectors. With this extended model, we also have improved the well-kno wn probabilistic classi cation method based on the Bernoulli document generation model. Our classi er based on these improvements performes signi cantly better on pre-classi ed samples from the web and the US Patent database than the usual classi ers.", "We propose a new statistical model for the classification of structured documents and consider its use for multimedia document classification. Its main originality is its ability to simultaneously take into account the structural and the content information present in a structured document, and also to cope with different types of content (text, image, etc). We present experiments on the classification of multilingual pornographic HTML pages using text and image data. The system accurately classifies porn sites from 8 European languages. This corpus has been developed by EADS company in the context of a large Web site filtering application." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
XML document clustering has been used mostly for visualizing large collections of documents; for example, @cite_2 cluster AML (Astronomical Markup Language) documents based only on their links. @cite_3 propose a model similar to @cite_1 but add in- and out-links to the model, and they use it for clustering rather than classification. @cite_4 propose a BitCube model for clustering that represents documents by their ePaths (paths of text elements) and textual content; their focus is on evaluating time performance rather than clustering effectiveness.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "3603", "2134222424", "1575842006", "2084370216" ], "abstract": [ "In this paper, we describe a new bitmap indexing technique to cluster XML documents. XML is a new standard for exchanging and representing information on the Internet. Documents can be hierarchically represented by XML-elements. XML documents are represented and indexed using a bitmap indexing technique. We define the similarity and popularity operations available in bitmap indexes and propose a method for partitioning a XML document set. Furthermore, a 2-dimensional bitmap index is extended to a 3dimensional bitmap index, called BitCube. We define statistical measurements in the BitCube: mean, mode, standard derivation, and correlation coefficient. Based on these measurements, we also define the slice, project, and dice operations on a BitCube. BitCube can be manipulated efficiently and improves the performance of document retrieval.", "In this paper, w edescribe a novel text classi er that can e ectiv ely cope with structured documents. We report experiments that compare its performance with that of a wellknown probabilistic classi er. Our novel classi er can take adv antage of the information in the structure of document that con ventional, purely term-based classi ers ignore.Conventional classi ers are mostly based on the vector space model of document, which views a document simply as an n-dimensional vector of terms. T o retain the information in the structure, w e ha ve dev eloped a structured vector model, which represents a document with a structured vector, whose elements can be either terms or other structured vectors. With this extended model, we also have improved the well-kno wn probabilistic classi cation method based on the Bernoulli document generation model. Our classi er based on these improvements performes signi cantly better on pre-classi ed samples from the web and the US Patent database than the usual classi ers.", "A semi-structured document has more structured information compared to an ordinary document, and the relation among semi-structured documents can be fully utilized. In order to take advantage of the structure and link information in a semi-structured document for better mining, a structured link vector model (SLVM) is presented in this paper, where a vector represents a document, and vectors' elements are determined by terms, document structure and neighboring documents. Text mining based on SLVM is described in the procedure of K-means for briefness and clarity: calculating document similarity and calculating cluster center. The clustering based on SLVM performs significantly better than that based on a conventional vector space model in the experiments, and its F value increases from 0.65-0.73 to 0.82-0.86.", "Abstract Self-organization or clustering of data objects can be a powerful aid towards knowledge discovery in distributed databases. The web presents opportunities for such clustering of documents and other data objects. This potential will be even more pronounced when XML becomes widely used over the next few years. Based on clustering of XML links, we explore a visualization approach for discovering knowledge on the web." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
Another direction is clustering the Web documents returned as answers to a query, as an alternative to ranked lists. @cite_11 propose an original algorithm based on a suffix tree structure that is linear in the size of the collection and incremental, an important property for supporting online clustering.
{ "cite_N": [ "@cite_11" ], "mid": [ "2100958137" ], "abstract": [ "Users of Web search engines are often forced to sift through the long ordered list of document \"snippets\" returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
@cite_5 compare different text feature extraction methods and variants of a linear-time clustering algorithm that combines random seed selection with center adjustment.
{ "cite_N": [ "@cite_5" ], "mid": [ "2070412788" ], "abstract": [ "Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of FMeasure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tfidfand feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter Gather (buckshot, fractionation, and split join) and kmeans. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time quality tradeoff quantitatively." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
For routing on a circle, the best-known constructions have @math and @math . Examples include Chord @cite_15 with distance function @math , a variant of Chord with bidirectional links @cite_4 and distance function @math , and the hypercube with distance function @math . In this paper, we improve upon all of these constructions by showing how to route in @math hops in the worst case with @math links per node.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2118428193", "2070219632" ], "abstract": [ "A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.", "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Routing with distance function @math has been studied for Chord @cite_4 , a popular topology for P2P networks. Chord has @math nodes, with out-degree @math per node. The longest route takes @math hops. In terms of @math and @math , the largest-sized Chord network has @math nodes. Moreover, @math and @math cannot be chosen independently -- they are functionally related. Both @math and @math are @math . Analysis of routing on Chord leaves open the following question:
{ "cite_N": [ "@cite_4" ], "mid": [ "2070219632" ], "abstract": [ "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Xu et al. @cite_16 provide a partial answer to the above question by studying routing with distance function @math over graph topologies. A graph over @math nodes placed in a circle is said to be uniform if the set of clockwise offsets of out-going links is identical for all nodes. Chord is an example of a uniform graph. Xu et al. show that for any uniform graph with @math links per node, routing with distance function @math necessitates @math hops in the worst case.
{ "cite_N": [ "@cite_16" ], "mid": [ "2031684765" ], "abstract": [ "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Cordasco et al. @cite_19 extend the result of Xu et al. @cite_16 by showing that routing with distance function @math in a uniform graph over @math nodes satisfies the inequality @math , where @math denotes the out-degree of each node, @math is the length of the longest path, and @math denotes the @math Fibonacci number. It is well known that @math , where @math is the Golden ratio and @math denotes the integer closest to real number @math . It follows that @math . Cordasco et al. show that the inequality is strict if @math . For @math , they construct uniform graphs based upon Fibonacci numbers that achieve an optimal tradeoff between @math and @math .
{ "cite_N": [ "@cite_19", "@cite_16" ], "mid": [ "1599788664", "2031684765" ], "abstract": [ "We propose a family of novel schemes based on Chord retaining all positive aspects that made Chord a popular topology for routing in P2P networks. The schemes, based on the Fibonacci number system, allow to improve on the maximum average number of hops for lookups and the routing table size per node.", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
The results in @cite_4 @cite_16 @cite_19 leave open the question of whether there exists any graph construction that permits routes of length @math with distance function @math and/or @math . Our construction provides an answer to this question: it is a non-uniform graph, in which the set of clockwise offsets of out-going links differs from node to node.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_4" ], "mid": [ "1599788664", "2031684765", "2070219632" ], "abstract": [ "We propose a family of novel schemes based on Chord retaining all positive aspects that made Chord a popular topology for routing in P2P networks. The schemes, based on the Fibonacci number system, allow to improve on the maximum average number of hops for lookups and the routing table size per node.", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves.", "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Kleinberg's construction has found applications in the design of overlay routing networks for Distributed Hash Tables. Symphony @cite_13 is an adaptation of Kleinberg's construction in a single dimension. The idea is to place @math nodes in a virtual circle and to equip each node with @math out-going links. In the resulting network, the average path length of routes with distance function @math is @math hops. Note that unlike Kleinberg's network, the space here is virtual, and so are the distances and the sense of routing. The same complexity was achieved with a slightly different Kleinberg-style construction by Aspnes et al. @cite_18 . In the same paper, it was also shown that any symmetric, randomized degree- @math network has @math routing complexity.
{ "cite_N": [ "@cite_18", "@cite_13" ], "mid": [ "1992467531", "1564854496" ], "abstract": [ "We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg [5], is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work.", "We present Symphony, a novel protocol for maintaining distributed hash tables in a wide area network. The key idea is to arrange all participants along a ring and equip them with long distance contacts drawn from a family of harmonic distributions. Through simulation, we demonstrate that our construction is scalable, flexible, stable in the presence of frequent updates and offers small average latency with only a handful of long distance links per node. The cost of updates when hosts join and leave is small." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Papillon outperforms all of the above randomized constructions, using degree @math and achieving @math routing. It should be possible to randomize Papillon along principles similar to those of the Viceroy @cite_14 randomized construction of the butterfly network, though we do not pursue this direction here.
{ "cite_N": [ "@cite_14" ], "mid": [ "1970564778" ], "abstract": [ "We propose a family of constant-degree routing networks of logarithmic diameter, with the additional property that the addition or removal of a node to the network requires no global coordination, only a constant number of linkage changes in expectation, and a logarithmic number with high probability. Our randomized construction improves upon existing solutions, such as balanced search trees, by ensuring that the congestion of the network is always within a logarithmic factor of the optimum with high probability. Our construction derives from recent advances in the study of peer-to-peer lookup networks, where rapid changes require efficient and distributed maintenance, and where the lookup efficiency is impacted both by the lengths of paths to requested data and the presence or elimination of bottlenecks in the network." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
With @math out-going links per node, several graphs over @math nodes in a circle support routes with @math greedy hops. Deterministic graphs with this property include: (a) the original Chord @cite_15 topology with distance function @math , and (b) Chord with edges treated as bidirectional @cite_4 with distance function @math . This is also the known lower bound for any uniform graph with distance function @math @cite_16 . Randomized graphs with the same tradeoff include randomized-Chord @cite_2 @cite_22 and Symphony @cite_13 -- both with distance function @math . With degree @math , Symphony @cite_13 has routes of length @math on average. The network of @cite_18 also supports routes of length @math on average, leaving a gap to the known lower bound of @math for their network.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "1992467531", "2070219632", "2049794981", "2096706512", "2118428193", "2031684765", "1564854496" ], "abstract": [ "We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg [5], is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work.", "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure.", "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems.In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems.For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. 
We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord.Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance.", "A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. 
We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves.", "We present Symphony, a novel protocol for maintaining distributed hash tables in a wide area network. The key idea is to arrange all participants along a ring and equip them with long distance contacts drawn from a family of harmonic distributions. Through simulation, we demonstrate that our construction is scalable, flexible, stable in the presence of frequent updates and offers small average latency with only a handful of long distance links per node. The cost of updates when hosts join and leave is small." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be either the clockwise distance or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. This is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
The construction demonstrates that we can indeed design networks in which greedy routing under these metrics has asymptotically optimal routing complexity. Our contribution is a family of networks that extends the Butterfly network family so as to facilitate efficient greedy routing. With @math links per node, greedy routes are @math in the worst case, which is asymptotically optimal. For @math , this beats the lower bound of @cite_18 on symmetric, randomized greedy routing networks (and matches it for @math ). In the specific case of @math , our greedy routing achieves @math average route length.
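To make the greedy-routing setting above concrete, the sketch below routes on a ring of n nodes by always forwarding to the out-neighbor closest to the target under a chosen ring metric (clockwise or absolute distance). The power-of-two finger set is an assumption used only for illustration; it is not the butterfly-based construction contributed here.

def clockwise(a, b, n):
    """Clockwise distance from node a to node b on a ring of n nodes."""
    return (b - a) % n

def absolute(a, b, n):
    """Absolute (bidirectional) ring distance between nodes a and b."""
    d = (b - a) % n
    return min(d, n - d)

def greedy_route(src, dst, n, links, dist=clockwise):
    """Forward hop by hop to the out-neighbor that minimizes dist(., dst, n)."""
    path = [src]
    cur = src
    while cur != dst:
        nxt = min(((cur + off) % n for off in links),
                  key=lambda v: dist(v, dst, n))
        if dist(nxt, dst, n) >= dist(cur, dst, n):
            break  # no neighbor is closer; cannot happen while the +1 link is present
        cur = nxt
        path.append(cur)
    return path

if __name__ == "__main__":
    n = 1024
    links = [2 ** k for k in range(10)]  # power-of-two fingers: out-degree log2(n)
    route = greedy_route(5, 900, n, links)
    print(len(route) - 1, "hops:", route)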
{ "cite_N": [ "@cite_18" ], "mid": [ "1992467531" ], "abstract": [ "We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg [5], is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be either the clockwise distance or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. This is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Recent work @cite_9 explores the surprising advantages of greedy routing with lookahead in randomized graphs over @math nodes in a circle. The idea behind lookahead is to take a neighbor's neighbors into account when making routing decisions. It shows that greedy routing with lookahead achieves @math expected route length in Symphony @cite_13 . For other networks which have @math out-going links per node, e.g., randomized-Chord @cite_2 @cite_22 , randomized-hypercubes @cite_2 , skip-graphs @cite_20 and SkipNet @cite_8 , the average path length is @math hops. Among these networks, Symphony and randomized-Chord use greedy routing with distance function @math . The other networks use different distance functions (none of them uses @math ). For each of these networks, with @math out-going links per node, it was established that plain greedy routing (without lookahead) is sub-optimal and achieves @math expected route lengths. These results suggest that lookahead has a significant impact on routing.
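The lookahead idea can be sketched as follows: instead of hopping to the neighbor closest to the target, a node also inspects its neighbors' neighbors and takes the first hop of the best two-hop option. The link distribution below is a rough Symphony-style harmonic draw and the parameters are arbitrary; the sketch only illustrates the mechanism, not the cited analyses.

import random

def clockwise(a, b, n):
    return (b - a) % n

def build_links(n, k, rng):
    """Successor link plus k long links with a rough Symphony-style harmonic
    draw (link length about n**u for u uniform in [0, 1)); an assumption."""
    links = {}
    for u in range(n):
        longs = {(u + max(1, int(n ** rng.random()))) % n for _ in range(k)}
        links[u] = sorted(longs | {(u + 1) % n})
    return links

def greedy(src, dst, n, links):
    """Plain greedy: hop to the neighbor with the smallest clockwise distance."""
    cur, hops = src, 0
    while cur != dst:
        cur = min(links[cur], key=lambda v: clockwise(v, dst, n))
        hops += 1
    return hops

def non_greedy(src, dst, n, links):
    """1-lookahead (neighbor-of-neighbor): score each neighbor by the best
    node reachable from it in one further hop, then take that first hop."""
    cur, hops = src, 0
    while cur != dst:
        if dst in links[cur]:
            cur = dst
        else:
            cur = min(links[cur],
                      key=lambda v: min(clockwise(w, dst, n) for w in links[v] + [v]))
        hops += 1
    return hops

if __name__ == "__main__":
    rng = random.Random(1)
    n, k = 4096, 6
    links = build_links(n, k, rng)
    pairs = [(rng.randrange(n), rng.randrange(n)) for _ in range(200)]
    for name, route in (("plain greedy", greedy), ("NoN greedy", non_greedy)):
        print(name, sum(route(s, d, n, links) for s, d in pairs) / len(pairs))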
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_9", "@cite_2", "@cite_13", "@cite_20" ], "mid": [ "2049794981", "1492552531", "2160405192", "2096706512", "1564854496", "" ], "abstract": [ "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems.In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems.For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord.Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "Scalable overlay networks such as Chord, CAN, Pastry, and Tapestry have recently emerged as flexible infrastructure for building large peer-to-peer systems. In practice, such systems have two disadvantages: They provide no control over where data is stored and no guarantee that routing paths remain within an administrative domain whenever possible. SkipNet is a scalable overlay network that provides controlled data placement and guaranteed routing locality by organizing data primarily by string names. SkipNet allows for both fine-grained and coarse-grained control over data placement: Content can be placed either on a pre-determined node or distributed uniformly across the nodes of a hierarchical naming sub-tree. An additional useful consequence of SkipNet's locality properties is that partition failures, in which an entire organization disconnects from the rest of the system, can result in two disjoint, but well-connected overlay networks.", "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e. g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. 
We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)- greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions.The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN- greedy routing algorithm is able to diminish route-lengths to Θ(log n log log n) hops, which is asymptotically optimal.", "The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance.", "We present Symphony, a novel protocol for maintaining distributed hash tables in a wide area network. The key idea is to arrange all participants along a ring and equip them with long distance contacts drawn from a family of harmonic distributions. Through simulation, we demonstrate that our construction is scalable, flexible, stable in the presence of frequent updates and offers small average latency with only a handful of long distance links per node. The cost of updates when hosts join and leave is small.", "" ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be either the clockwise distance or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. This is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Our construction demonstrates that it is possible to build a graph in which each node has degree @math and in which greedy routing has routes of length @math in the worst case, for the metrics @math , @math and @math . Furthermore, for all @math , plain greedy routing on our network design beats even the results obtained in @cite_9 with @math -lookahead.
{ "cite_N": [ "@cite_9" ], "mid": [ "2160405192" ], "abstract": [ "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e. g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)- greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions.The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN- greedy routing algorithm is able to diminish route-lengths to Θ(log n log log n) hops, which is asymptotically optimal." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be either the clockwise distance or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. This is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Deterministic butterflies have been proposed for DHT routing by Xu et al. @cite_16 , who subsequently developed their ideas into Ulysses @cite_6 . Our construction for distance function @math has structural similarities with Ulysses: both are butterfly-based networks. The key differences are as follows: (a) Ulysses does not use @math as its distance function, (b) Ulysses does not use greedy routing, and (c) Ulysses uses more links per node than our construction for distance function @math ; these additional links were introduced to ameliorate the non-uniform edge congestion caused by Ulysses' routing algorithm. In contrast, the congestion-free routing algorithm developed in this work obviates the need for any additional links (see Theorem ).
{ "cite_N": [ "@cite_16", "@cite_6" ], "mid": [ "2031684765", "2049130980" ], "abstract": [ "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves.", "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI" ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be either the clockwise distance or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst case, with @math out-going links per node. This is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Viceroy @cite_14 is a butterfly network that routes in @math hops in expectation with @math links per node. Mariposa (see reference @cite_23 or @cite_21 ) improves upon Viceroy by providing routes of length @math in the worst case, with @math out-going links per node. Viceroy and Mariposa differ from other randomized networks in their design philosophy. Our topology borrows elements of the geometric embedding of the butterfly in a circle from Viceroy @cite_14 and from @cite_21 , while extending them to support greedy routing.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_23" ], "mid": [ "1970564778", "2142418251", "69178526" ], "abstract": [ "We propose a family of constant-degree routing networks of logarithmic diameter, with the additional property that the addition or removal of a node to the network requires no global coordination, only a constant number of linkage changes in expectation, and a logarithmic number with high probability. Our randomized construction improves upon existing solutions, such as balanced search trees, by ensuring that the congestion of the network is always within a logarithmic factor of the optimum with high probability. Our construction derives from recent advances in the study of peer-to-peer lookup networks, where rapid changes require efficient and distributed maintenance, and where the lookup efficiency is impacted both by the lengths of paths to requested data and the presence or elimination of bottlenecks in the network.", "Routing topologies for distributed hashing in peer-to-peer networks are classified into two categories: deterministic and randomized. A general technique for constructing deterministic routing topologies is presented. Using this technique, classical parallel interconnection networks can be adapted to handle the dynamic nature of participants in peer-to-peer networks. A unified picture of randomized routing topologies is also presented. Two new protocols are described which improve average latency as a function of out-degree. One of the protocols can be shown to be optimal with high probability. Finally, routing networks for distributed hashing are revisited from a systems perspective and several open design problems are listed.", "Dipsea is a modular architecture for building a Distributed Hash Table (DHT). A DHT is a large hash table that is cooperatively maintained by a large number of machines communicating over the Internet. Decentralization and automatic re-configuration are two key design goals for a DHT. The architecture of Dipsea consists of three layers: ID Management, Overlay Routing and Data Management. The Overlay Routing layer consists of three modules: Emulation Engine, Ring Management and Choice of Long-Distance Links. Efficient algorithms for ID Management are designed—these algorithms require few messages, require few re-assignments of existing IDs and ensure that the hash table is divided among the participating machines as evenly as possible. Ring Management ensures that participating machines establish connections among themselves, as a function of their IDs, to form a fault-tolerant ring. The Emulation Engine is responsible for “emulation” of arbitrary families of routing networks. It handles issues arising out of dynamism (arrival and departure of participating machines), scale (variation in the average number of participating machines) and physical network proximity. Choice of Long-Distance Links allows a DHT to choose any family of routing networks (deterministic or randomized) for emulation. Several deterministic and randomized routing networks are designed and analyzed. Among these are Symphony (one of the first randomized DHT routing networks), Papillon (a deterministic routing network that guarantees asymptotically optimal route lengths with greedy routing for a fixed out-degree of nodes), and Mariposa (a randomized routing network that also guarantees optimal route lengths for a given out-degree of nodes)." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
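The indirection scheme described in this abstract can be pictured with the minimal sketch below. A plain Python dict stands in for the DHT and every name in it is hypothetical; the point is only the bookkeeping (one host entry updated on movement instead of one entry per resource), not OpenDHT's actual interface.

dht = {}  # a dict standing in for the distributed hash table

def publish(resource_ids, host_id, address):
    for rid in resource_ids:
        dht[rid] = host_id      # resource entry maps to a host identifier ...
    dht[host_id] = address      # ... and a single host entry maps to the address

def resolve(rid):
    return dht[dht[rid]]        # two lookups: resource -> host -> network address

def host_moved(host_id, new_address):
    dht[host_id] = new_address  # one update, however many resources the host holds

if __name__ == "__main__":
    publish(["res-%d" % i for i in range(1000)], "host-42", "10.0.0.7:4000")
    host_moved("host-42", "192.0.2.9:4000")
    print(resolve("res-17"))    # -> 192.0.2.9:4000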
Ballintijn et al. argue that resource naming should be decoupled from resource identification @cite_7 . Resources are named with human-friendly names, which are based on DNS @cite_9 , while identification is done with object handles, which are globally unique identifiers that need not contain network locations. They use DNS to resolve human-friendly names to object handles and a location service to resolve object handles to network locations. The location service uses a hierarchical architecture for resolving object handles. This two-level approach allows resources to be named without worrying about replication or migration, and to be identified without worrying about naming policies.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2122269925", "2096392987" ], "abstract": [ "The Domain Name System (DNS) provides name service for the DARPA Internet. It is one of the largest name services in operation today, serves a highly diverse community of hosts, users, and networks, and uses a unique combination of hierarchies, caching, and datagram access. This paper examines the ideas behind the initial design of the DNS in 1983, discusses the evolution of these ideas into the current implementations and usages, notes conspicuous surprises, successes and shortcomings, and attempts to predict its future evolution.", "To fill the gap between what uniform resource names (URNs) provide and what humans need, we propose a new kind of uniform resource identifier (URI) called human-friendly names (HFNs). In this article, we present the design for a scalable HFN-to-URL (uniform resource locator) resolution mechanism that makes use of the Domain Name System (DNS) and the Globe location service to name and locate resources. This new URI proposes to improve both scalability and usability in naming replicated resources on the Web." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Walfish et al. argue for the use of semantic-free references for identifying web documents instead of URLs @cite_8 . The reason is that changes in naming policies or in the ownership of DNS domain names often leave previous URLs pointing to unrelated or non-existent documents, even when the original documents still exist. Semantic-free references are hashes of public keys or other data, and are resolved to URLs using a distributed hash table based on Chord @cite_13 . Using semantic-free references allows web documents to link to each other without worrying about changes in the documents' URLs.
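A minimal sketch of the semantic-free referencing idea follows; the choice of SHA-1, the key material and the dict standing in for the DHT are illustrative assumptions rather than details of the SFR system.

import hashlib

def semantic_free_ref(public_key_bytes):
    """Derive a flat, meaning-free reference from the publisher's key material."""
    return hashlib.sha1(public_key_bytes).hexdigest()

resolution_table = {}  # stands in for the DHT that resolves references to URLs

def bind(ref, url):
    resolution_table[ref] = url

def resolve(ref):
    return resolution_table.get(ref)

if __name__ == "__main__":
    ref = semantic_free_ref(b"example public key bytes")
    bind(ref, "http://example.org/paper.html")
    # The document can later move without its reference changing.
    bind(ref, "http://mirror.example.net/paper.html")
    print(ref, "->", resolve(ref))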
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2118428193", "144112633" ], "abstract": [ "A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.", "The Web relies on the Domain Name System (DNS) to resolve the hostname portion of URLs into IP addresses. This marriage-of-convenience enabled the Web's meteoric rise, but the resulting entanglement is now hindering both infrastructures--the Web is overly constrained by the limitations of DNS, and DNS is unduly burdened by the demands of the Web. There has been much commentary on this sad state-of-affairs, but dissolving the ill-fated union between DNS and the Web requires a new way to resolve Web references. To this end, this paper describes the design and implementation of Semantic Free Referencing (SFR), a reference resolution infrastructure based on distributed hash tables (DHTs)." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Distributed hash tables, also called peer-to-peer structured overlay networks, are distributed systems that map a uniformly distributed identifier space onto the nodes in the system @cite_3 @cite_13 @cite_20 . Nodes act as peers, with no node having to play a special role, and a distributed hash table can continue operating even as nodes join or leave the system. Lookups and updates to a distributed hash table are scalable, typically taking time logarithmic in the number of nodes in the system. We experimentally evaluated our work using OpenDHT @cite_12 , a public distributed hash table service based on Bamboo @cite_5 .
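As a sketch of the identifier-to-node mapping such systems provide, the toy code below hashes node names and keys onto a ring and assigns each key to its Chord-style successor node. It ignores routing tables, replication and churn entirely, so it is a model of the mapping, not of any particular DHT implementation.

import bisect
import hashlib

RING = 2 ** 32  # size of the identifier space

def h(value):
    """Hash a string onto the identifier ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % RING

class ToyDHT:
    """Chord-style successor mapping: a key belongs to the first node whose
    identifier is >= the key's identifier, wrapping around the ring."""

    def __init__(self, node_names):
        self.points = sorted((h(name), name) for name in node_names)

    def lookup(self, key):
        i = bisect.bisect_left(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

if __name__ == "__main__":
    dht = ToyDHT(["node-%d" % i for i in range(8)])
    for key in ("res-1", "res-2", "res-3"):
        print(key, "->", dht.lookup(key))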
{ "cite_N": [ "@cite_3", "@cite_5", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "2160279333", "2162733677", "2118428193", "2143339817", "2123482462" ], "abstract": [ "Distributed computer architectures labeled \"peer-to-peer\" are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance.Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content.In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in---and affected by---the architectural design decisions adopted by current peer-to-peer systems.We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes.", "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations.", "A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. 
Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.", "Large-scale distributed systems are hard to deploy, and distributed hash tables (DHTs) are no exception. To lower the barriers facing DHT-based applications, we have created a public DHT service called OpenDHT. Designing a DHT that can be widely shared, both among mutually untrusting clients and among a variety of applications, poses two distinct challenges. First, there must be adequate control over storage allocation so that greedy or malicious clients do not use more than their fair share. Second, the interface to the DHT should make it easy to write simple clients, yet be sufficiently general to meet a broad spectrum of application requirements. In this paper we describe our solutions to these design challenges. We also report our early deployment experience with OpenDHT and describe the variety of applications already using the system.", "We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
There has also been research on implementing distributed hash tables on top of mobile ad hoc networks @cite_4 @cite_6 . As with Mobile IP @cite_15 and HIP @cite_1 , hosts in mobile ad hoc networks do not change their network address as they move, so there would be no need to update entries in a distributed hash table used for resolving resource identifiers. However, almost none of the Internet is part of a mobile ad hoc network, so such approaches are of little help to applications that need to run on current networks.
{ "cite_N": [ "@cite_1", "@cite_15", "@cite_4", "@cite_6" ], "mid": [ "", "32990903", "2123181643", "2151682391" ], "abstract": [ "", "A test device using the \"four square\" principal of testing specimens which permits input of dynamic torque changes and being arranged to eliminate the inertia effects of the mass of connecting gears or mechanisms between the parallel members of the four square test device.", "While unstructured P2P systems have been embraced widely in mobile ad-hoc networks (MANETs), the applicability of structured approaches like distributed hash tables (DHTs) to such settings remains controversial. Existing research delivers promising empirical results addressing the concerns about performance, complexity, and reliability, but does not analyze the principles of combining DHTs and MANETs. This paper identifies and discusses the fundamental implications of non-infrastructural networks for DHTs and analyzes solutions to these challenges.", "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not to maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several ten thousands of nodes.This paper addresses the problem of churn in mobile hash tables. Similar to Internet based peer-to-peer systems a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery technique used in the internet domain can be deployed in the mobile hash table. Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks." ] }
0706.0430
2950884312
As decentralized computing scenarios get ever more popular, unstructured topologies are natural candidates to consider running mix networks upon. We consider mix network topologies where mixes are placed on the nodes of an unstructured network, such as social networks and scale-free random networks. We explore the efficiency and traffic analysis resistance properties of mix networks based on unstructured topologies as opposed to theoretically optimal structured topologies, under high latency conditions. We consider a mix of directed and undirected network models, as well as one real world case study -- the LiveJournal friendship network topology. Our analysis indicates that mix-networks based on scale-free and small-world topologies have, firstly, mix-route lengths that are roughly comparable to those in expander graphs; second, that compromise of the most central nodes has little effect on anonymization properties, and third, batch sizes required for warding off intersection attacks need to be an order of magnitude higher in unstructured networks in comparison with expander graph topologies.
Borisov @cite_11 analyzes anonymous communication over an overlay network with a de Bruijn graph topology and comments on its favorable mixing properties.
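For intuition about why de Bruijn topologies mix quickly, the sketch below builds the out-neighbors of a binary de Bruijn graph (shift the identifier left and append a bit) and samples a short random walk; the parameters are arbitrary and the code illustrates only the graph structure, not the cited analysis.

import random

def debruijn_neighbors(x, b):
    """Out-neighbors of node x in the binary de Bruijn graph on b-bit identifiers:
    shift the identifier left and append either bit."""
    mask = (1 << b) - 1
    return [(x << 1) & mask, ((x << 1) | 1) & mask]

def random_walk(start, b, steps, rng):
    x = start
    for _ in range(steps):
        x = rng.choice(debruijn_neighbors(x, b))
    return x

if __name__ == "__main__":
    rng = random.Random(0)
    b = 10  # 2**10 = 1024 nodes
    # After exactly b steps the endpoint is determined by the b appended bits,
    # so a walk from any fixed start is uniformly distributed over all nodes.
    samples = [random_walk(0, b, b, rng) for _ in range(5000)]
    print(len(set(samples)), "distinct endpoints in 5000 walks of length", b)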
{ "cite_N": [ "@cite_11" ], "mid": [ "2163598416" ], "abstract": [ "As more of our daily activities are carried out online, it becomes important to develop technologies to protect our online privacy. Anonymity is a key privacy technology, since it serves to hide patterns of communication that can often be as revealing as their contents. This motivates our study of the use of large scale peer-to-peer systems for building anonymous systems. We first develop a novel methodology for studying the anonymity of peer-to-peer systems, based on an information-theoretic anonymity metric and simulation. We use simulations to sample a probability distribution modeling attacker knowledge under conservative assumptions and estimate the entropy-based anonymity metric using the sampled distribution. We then validate this approach against an analytic method for computing entropy. The use of sampling introduces some error, but it can be accurately bounded and therefore we can make rigorous statements about the success of an entire class of attacks. We next apply our methodology to perform the first rigorous analysis of Freenet, a peer-to-peer anonymous publishing system, and identify a number of weaknesses in its design. We show that a targeted attack on high-degree nodes can be very effective at reducing anonymity. We also consider a next generation routing algorithm proposed by the Freenet authors to improve performance and show that it has a significant negative impact on anonymity. Finally, even in the best case scenario, the anonymity levels provided by Freenet are highly variable and, in many cases, little or no anonymity is achieved. To provide more uniform anonymity protection, we propose a new design for peer-to-peer anonymous systems based on structured overlays. We use random walks along the overlay to provide anonymity. We compare the mixing times of random walks on different graph structures and find that de Bruijn graphs are superior to other structures such as the hypercube or butterfly. Using our simulation methodology, we analyze the anonymity achieved by our design running on top of Koorde, a structured overlay based on de Bruijn graphs. We show that it provides anonymity competitive with Freenet in the average case, while ensuring that worst-case anonymity remains at an acceptable level. We also maintain logarithmic guarantees on routing performance." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
The chief alternative to iterative approximation is to produce an exact propositional characterization of the abstract transition relation. For example, the method of @cite_3 uses small-domain techniques to translate a first-order transition formula into a propositional one that is equisatisfiable over the state-holding predicates. However, this translation introduces a large number of auxiliary Boolean variables, making it impractical to use BDD-based methods for image computation. Although SAT-based Boolean quantifier elimination methods can be used, the effect is still essentially to enumerate the states in the image. By contrast, the interpolation-based method produces an approximate transition relation with no auxiliary Boolean variables, allowing efficient use of BDD-based methods.
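The cost of enumeration-based image computation can be illustrated with the toy sketch below, which computes an abstract image by explicitly enumerating assignments to the auxiliary and next-state variables and projecting onto the predicate values. A real tool would drive this with a SAT solver and blocking clauses; the brute-force version here is only meant to show that the image is obtained state by state.

from itertools import product

def image(trans, n_pred, n_aux, states):
    """Abstract image of `states` under `trans(cur, aux, nxt) -> bool`, computed
    by enumerating every assignment and projecting onto the next-state predicates."""
    img = set()
    for cur in states:
        for aux in product((0, 1), repeat=n_aux):
            for nxt in product((0, 1), repeat=n_pred):
                if trans(cur, aux, nxt):
                    img.add(nxt)  # keep only the next-state predicate values
    return img

if __name__ == "__main__":
    # Two predicates p0, p1 and one auxiliary input i:
    #   p0' = p0 xor i,   p1' = p1 or p0
    def trans(cur, aux, nxt):
        p0, p1 = cur
        (i,) = aux
        return nxt == (p0 ^ i, p1 | p0)

    print(sorted(image(trans, n_pred=2, n_aux=1, states={(0, 0)})))
    # -> [(0, 0), (1, 0)]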
{ "cite_N": [ "@cite_3" ], "mid": [ "1601517679" ], "abstract": [ "Predicate abstraction is a useful form of abstraction for the verification of transition systems with large or infinite state spaces. One of the main bottlenecks of this approach is the extremely large number of decision procedures calls that are required to construct the abstract state space. In this paper we propose the use of a symbolic decision procedure and its application for predicate abstraction. The advantage of the approach is that it reduces the number of calls to the decision procedure exponentially and also provides for reducing the re-computations inherent in the current approaches. We provide two implementations of the symbolic decision procedure: one based on BDDs which leverages the current advances in early quantification algorithms, and the other based on SAT-solvers. We also demonstrate our approach with quantified predicates for verifying parameterized systems. We illustrate the effectiveness of this approach on benchmarks from the verification of microprocessors, communication protocols, parameterized systems, and Microsoft Windows device drivers." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
The most closely related method is that of Das and Dill @cite_7 . This method analyzes abstract counterexamples (sequences of predicate states), refining the transition relation approximation in such a way as to rule out infeasible transitions. The approach is effective, but it has the disadvantage that it works from a specific counterexample and does not consider the property being verified; thus it can easily generate refinements that are not relevant to the property. The interpolation-based method does not use abstract counterexamples. Rather, it generates facts relevant to proving the given property in a bounded sense. Thus, it tends to generate more relevant refinements and, as a result, converges more rapidly.
{ "cite_N": [ "@cite_7" ], "mid": [ "2134147303" ], "abstract": [ "Recently, we have improved the efficiency of the predicate abstraction scheme presented by Das, Dill and Park (1999). As a result, the number of validity checks needed to prove the necessary verification condition has been reduced. The key idea is to refine an approximate abstract transition relation based on the counter-example generated. The system starts with an approximate abstract transition relation on which the verification condition (in our case, this is a safety property) is model-checked. If the property holds then the proof is done; otherwise the model checker returns an abstract counter-example trace. This trace is used to refine the abstract transition relation if possible and start anew. At the end of the process, the system either proves the verification condition or comes up with an abstract counter-example trace which holds in the most accurate abstract transition relation possible (with the user-provided predicates as a basis). If the verification condition fails in the abstract system, then either the concrete system does not satisfy it or the abstraction predicates chosen are not strong enough. This algorithm has been used on a concurrent garbage collection algorithm and a secure contract-signing protocol. This method improved the performance on the first problem significantly, and allowed us to tackle the second problem, which the previous method could not handle." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
In @cite_8 , interpolants are used to choose new predicates to refine a predicate abstraction. Here, we use interpolants to refine an approximation of the abstract transition relation for a given set of predicates.
{ "cite_N": [ "@cite_8" ], "mid": [ "2151463894" ], "abstract": [ "The success of model checking for large programs depends crucially on the ability to efficiently construct parsimonious abstractions. A predicate abstraction is parsimonious if at each control location, it specifies only relationships between current values of variables, and only those which are required for proving correctness. Previous methods for automatically refining predicate abstractions until sufficient precision is obtained do not systematically construct parsimonious abstractions: predicates usually contain symbolic variables, and are added heuristically and often uniformly to many or all control locations at once. We use Craig interpolation to efficiently construct, from a given abstract error trace which cannot be concretized, a parsominous abstraction that removes the trace. At each location of the trace, we infer the relevant predicates as an interpolant between the two formulas that define the past and the future segment of the trace. Each interpolant is a relationship between current values of program variables, and is relevant only at that particular program location. It can be found by a linear scan of the proof of infeasibility of the trace.We develop our method for programs with arithmetic and pointer expressions, and call-by-value function calls. For function calls, Craig interpolation offers a systematic way of generating relevant predicates that contain only the local variables of the function and the values of the formal parameters when the function was called. We have extended our model checker Blast with predicate discovery by Craig interpolation, and applied it successfully to C programs with more than 130,000 lines of code, which was not possible with approaches that build less parsimonious abstractions." ] }
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
There is a significant body of literature on networks with Poisson-distributed nodes. In @cite_6 , the characteristic function of the interference was obtained when there is no fading and the nodes are Poisson distributed; the probability distribution function of the interference was also provided as an infinite series. The authors of @cite_2 analyze the interference when the interference contribution of a transmitter located at @math to a receiver located at the origin is exponentially distributed with parameter @math . Using this model, they derive the density function of the interference when the nodes are arranged as a one-dimensional lattice. The Laplace transform of the interference is also obtained when the nodes are Poisson distributed.
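For reference, the Laplace transform referred to here takes a standard form for a homogeneous Poisson field; the expression below is the textbook probability-generating-functional result, written with notation (intensity $\lambda$, i.i.d. fading marks $h$, path loss $\ell(x)$) introduced for this note rather than taken from the cited papers:

\[
\mathcal{L}_I(s) \;=\; \mathbb{E}\!\left[e^{-sI}\right] \;=\; \exp\!\left(-\lambda \int_{\mathbb{R}^2} \Big(1 - \mathbb{E}_h\big[e^{-s\,h\,\ell(x)}\big]\Big)\,\mathrm{d}x\right).
\]

Under unit-mean exponential (Rayleigh) fading, $\mathbb{E}_h\big[e^{-s\,h\,\ell(x)}\big] = 1/(1 + s\,\ell(x))$, which gives

\[
\mathcal{L}_I(s) \;=\; \exp\!\left(-\lambda \int_{\mathbb{R}^2} \frac{s\,\ell(x)}{1 + s\,\ell(x)}\,\mathrm{d}x\right).
\]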
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2171882038", "2042164227" ], "abstract": [ "The authors obtain the optimum transmission ranges to maximize throughput for a direct-sequence spread-spectrum multihop packet radio network. In the analysis, they model the network self-interference as a random variable which is equal to the sum of the interference power of all other terminals plus background noise. The model is applicable to other spread-spectrum schemes where the interference of one user appears as a noise source with constant power spectral density to the other users. The network terminals are modeled as a random Poisson field of interference power emitters. The statistics of the interference power at a receiving terminal are obtained and shown to be the stable distributions of a parameter that is dependent on the propagation power loss law. The optimum transmission range in such a network is of the form CK sup alpha where C is a constant, K is a function of the processing gain, the background noise power spectral density, and the degree of error-correction coding used, and alpha is related to the power loss law. The results obtained can be used in heuristics to determine optimum routing strategies in multihop networks. >", "This paper deals with the distribution of cumulated instantaneous interference power in a Rayleigh fading channel for an infinite number of interfering stations, where each station transmits with a certain probability, independently of all others. If all distances are known, a necessary and sufficient condition is given for the corresponding distribution to be nondefective. Explicit formulae of density and distribution functions are obtained in the interesting special case that interfering stations are located on a linear grid. Moreover, the Laplace transform of cumulated power is investigated when the positions of stations follow a one- or two-dimensional Poisson process. It turns out that the corresponding distribution is defective for the two-dimensional models." ] }