doc_id: string (length 4 to 40)
title: string (length 7 to 300)
abstract: string (length 2 to 10k)
corpus_id: uint64 (range 171 to 251M)
092ee36604edde54049f14fd1892f7f07c77c650
Learning Bayesian Networks from Data
Statistics, Pattern Recognition and Information Theory: There are many books on statistics. We find [DeGroot 1970] to be a good introduction to statistics, and to Bayesian statistics in particular. A more recent book [Gelman et al. 1995] is also a good introduction to this field and discusses recent advances, such as hierarchical priors. Books on pattern recognition, including the classic [Duda and Hart 1973] and the more recent [Bishop 1995], cover basic issues in density estimation and its use for pattern recognition and classification. A good introduction to information theory, and to notions such as KL divergence and mutual information, can be found in [Cover and Thomas 1991].
60,722,345
2174d4c7a56bb79b3c1e17bc38b88bb4780b42b2
Active Learning for Structure in Bayesian Networks
The task of causal structure discovery from empirical data is a fundamental problem in many areas. Experimental data is crucial for accomplishing this task. However, experiments are typically expensive, and must be selected with great care. This paper uses active learning to determine the experiments that are most informative towards uncovering the underlying structure. We formalize the causal learning task as that of learning the structure of a causal Bayesian network. We consider an active learner that is allowed to conduct experiments, where it intervenes in the domain by setting the values of certain variables. We provide a theoretical framework for the active learning problem, and an algorithm that actively chooses the experiments to perform based on the model learned so far. Experimental results show that active learning can substantially reduce the number of observations required to determine the structure of a domain.
8,155,128
29878cba122f390a390e27832cd6f3995f85403b
Probabilistic Abstraction Hierarchies
Many domains are naturally organized in an abstraction hierarchy or taxonomy, where the instances in "nearby" classes in the taxonomy are similar. In this paper, we provide a general probabilistic framework for clustering data into a set of classes organized as a taxonomy, where each class is associated with a probabilistic model from which the data was generated. The clustering algorithm simultaneously optimizes three things: the assignment of data instances to clusters, the models associated with the clusters, and the structure of the abstraction hierarchy. A unique feature of our approach is that it utilizes global optimization algorithms for both of the last two steps, reducing the sensitivity to noise and the propensity to local maxima that are characteristic of algorithms such as hierarchical agglomerative clustering that only take local steps. We provide a theoretical analysis for our algorithm, showing that it converges to a local maximum of the joint likelihood of model and data. We present experimental results on synthetic data, and on real data in the domains of gene expression and text.
12,378,099
311f963e8bee972f13b9dc6c59f139ee3c733582
Probabilistic Classification and Clustering in Relational Data
Supervised and unsupervised learning methods have traditionally focused on data consisting of independent instances of a single type. However, many real-world domains are best described by relational models in which instances of multiple types are related to each other in complex ways. For example, in a scientific paper domain, papers are related to each other via citation, and are also related to their authors. In this case, the label of one entity (e.g., the topic of the paper) is often correlated with the labels of related entities. We propose a general class of models for classification and clustering in relational domains that capture probabilistic dependencies between related instances. We show how to learn such models efficiently from data. We present empirical results on two real world data sets. Our experiments in a transductive classification setting indicate that accuracy can be significantly improved by modeling relational dependencies. Our algorithm automatically induces a very natural behavior, where our knowledge about one instance helps us classify related ones, which in turn help us classify others. In an unsupervised setting, our models produced coherent clusters with a very natural interpretation, even for instance types that do not have any attributes.
3,168,358
32b93e4485796519fc3478085d02c6866c5e6c5f
Learning an Agent's Utility Function by Observing Behavior
null
14,339,861
42c3aec2a30fbe7371825fcef18f3d8649d9127f
Solving Factored POMDPs with Linear Value Functions
Partially Observable Markov Decision Processes (POMDPs) provide a coherent mathematical framework for planning under uncertainty when the state of the system cannot be fully observed. However, the problem of finding an exact POMDP solution is intractable. Computing such a solution requires the manipulation of a piecewise linear convex value function, which specifies a value for each possible belief state. This value function can be represented by a set of vectors, each one with dimension equal to the size of the state space. In nontrivial problems, however, these vectors are too large for such a representation to be feasible, preventing the use of exact POMDP algorithms. We propose an approximation scheme where each vector is represented as a linear combination of basis functions to provide a compact approximation to the value function. We also show that this representation can be exploited to allow for efficient computations in approximate value and policy iteration algorithms in the context of factored POMDPs, where the transition model is specified using a dynamic Bayesian network.
14,800,848
435a54890e2d463985d6e476a162ec750796b595
Rich probabilistic models for gene expression
Clustering is commonly used for analyzing gene expression data. Despite their successes, clustering methods suffer from a number of limitations. First, these methods reveal similarities that exist over all of the measurements, while obscuring relationships that exist over only a subset of the data. Second, clustering methods cannot readily incorporate additional types of information, such as clinical data or known attributes of genes. To circumvent these shortcomings, we propose the use of a single coherent probabilistic model that encompasses much of the rich structure in the genomic expression data, while incorporating additional information such as experiment type, putative binding sites, or functional information. We show how this model can be learned from the data, allowing us to discover patterns in the data and dependencies between the gene expression patterns and additional attributes. The learned model reveals context-specific relationships that exist only over a subset of the experiments in the dataset. We demonstrate the power of our approach on synthetic data and on two real-world gene expression data sets for yeast. For example, we demonstrate a novel functionality that falls naturally out of our framework: predicting the "cluster" of the array resulting from a gene mutation based only on the gene's expression pattern in the context of other mutations.
6,000,575
45a5db95ee9099ca9848278d132bb554978d0bba
Sampling in Factored Dynamic Systems
In many real-world situations, we are interested in monitoring the evolution of a complex situation over time. For example, we may be monitoring a patient’s vital signs in an intensive care unit (Dagum and Galper 1995), analyzing a complex freeway traffic scene with the goal of controlling a moving vehicle (Huang, Koller, Malik, Ogasawara, Rao, Russell and Weber 1994), localizing a robot in a complex environment (Fox, Burgard and Thrun 1999) (see also Murphy and Russell (2001: this volume)), or tracking motion of non-rigid objects in a cluttered visual scene (Isard and Blake 1998a). We treat such systems as being in one of a possible set of states, where the state changes over time. We model the states as changing at discrete time intervals, so that x_t is the state of the system at time t. In most systems, we model the system states as having some internal structure: the system state is typically represented by some vector of variables X = (X_1, ..., X_n), where each X_i takes on values in some space Dom[X_i]. The possible states x are assignments of values to the variables X. In a traffic surveillance application, the state might contain variables such as the vehicle position, its velocity, the weather, and more.
117,092,006
462ebf9d2eeed6c907152e66061496ff02cfbf36
Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence
null
61,224,218
4cc1ce96bfa2ad8af16dbd0c2356a2cb5a476c24
Learning Probabilistic Models of Relational Structure
Most real-world data is stored in relational form. In contrast, most statistical learning methods work with “flat” data representations, forcing us to convert our data into a form that loses much of the relational structure. The recently introduced framework of probabilistic relational models (PRMs) allows us to represent probabilistic models over multiple entities that utilize the relations between them. In this paper, we propose the use of probabilistic models not only for the attributes in a relational model, but for the relational structure itself. We propose two mechanisms for modeling structural uncertainty: reference uncertainty and existence uncertainty. We describe the appropriate conditions for using each model and present learning algorithms for each. We present experimental results showing that the learned models can be used to predict relational structure and, moreover, the observed relational structure can be used to provide better predictions for the attributes in the model.
10,551,607
510e5d1244d32e54fc10f3771bb7ca7b94993542
Active learning: theory and applications
null
62,018,951
6068dfc41ce14def1ac1ff2fa5abcb3ba65a0d00
UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, University of Washington, Seattle, Washington, USA, August 2-5, 2001
Reading lets you gain knowledge beyond what you can learn from other people, and a book can be a more trusted source. These proceedings of the 17th Conference in Uncertainty in Artificial Intelligence can give you good ideas for success, not only in one particular area of life but in everything. Success begins with acquiring basic knowledge and acting on it.
39,059,719
6918184efb552a42535c819d71f935c05607e310
Exact Inference in Networks with Discrete Children of Continuous Parents
Many real life domains contain a mixture of discrete and continuous variables and can be modeled as hybrid Bayesian Networks (BNs). An important subclass of hybrid BNs are conditional linear Gaussian (CLG) networks, where the conditional distribution of the continuous variables given an assignment to the discrete variables is a multivariate Gaussian. Lauritzen's extension to the clique tree algorithm can be used for exact inference in CLG networks. However, many domains include discrete variables that depend on continuous ones, and CLG networks do not allow such dependencies to be represented. In this paper, we propose the first "exact" inference algorithm for augmented CLG networks -- CLG networks augmented by allowing discrete children of continuous parents. Our algorithm is based on Lauritzen's algorithm, and is exact in a similar sense: it computes the exact distributions over the discrete nodes, and the exact first and second moments of the continuous ones, up to inaccuracies resulting from numerical integration used within the algorithm. In the special case of softmax CPDs, we show that integration can often be done efficiently, and that using the first two moments leads to a particularly accurate approximation. We show empirically that our algorithm achieves substantially higher accuracy at lower cost than previous algorithms for this task.
14,174,028
69b78378f8a98ee49358f79984873842f0961b8d
Max-norm Projections for Factored MDPs
Markov Decision Processes (MDPs) provide a coherent mathematical framework for planning under uncertainty. However, exact MDP solution algorithms require the manipulation of a value function, which specifies a value for each state in the system. Most real-world MDPs are too large for such a representation to be feasible, preventing the use of exact MDP algorithms. Various approximate solution algorithms have been proposed, many of which use a linear combination of basis functions as a compact approximation to the value function. Almost all of these algorithms use an approximation based on the (weighted) L2-norm (Euclidean distance); this approach prevents the application of standard convergence results for MDP algorithms, all of which are based on max-norm. This paper makes two contributions. First, it presents the first approximate MDP solution algorithms - both value and policy iteration - that use max-norm projection, thereby directly optimizing the quantity required to obtain the best error bounds. Second, it shows how these algorithms can be applied efficiently in the context of factored MDPs, where the transition model is specified using a dynamic Bayesian network.
277,832
851e3f1780a89d38505fc0887a58a9fc3c7b493e
Selectivity estimation using probabilistic models
Estimating the result size of complex queries that involve selection on multiple attributes and the join of several relations is a difficult but fundamental task in database query processing. It arises in cost-based query optimization, query profiling, and approximate query answering. In this paper, we show how probabilistic graphical models can be effectively used for this task as an accurate and compact approximation of the joint frequency distribution of multiple attributes across multiple relations. Probabilistic Relational Models (PRMs) are a recent development that extends graphical statistical models such as Bayesian Networks to relational domains. They represent the statistical dependencies between attributes within a table, and between attributes across foreign-key joins. We provide an efficient algorithm for constructing a PRM from a database, and show how a PRM can be used to compute selectivity estimates for a broad class of queries. One of the major contributions of this work is a unified framework for the estimation of queries involving both select and foreign-key join operations. Furthermore, our approach is not limited to answering a small set of predetermined queries; a single model can be used to effectively estimate the sizes of a wide collection of potential queries across multiple tables. We present results for our approach on several real-world databases. For both single-table multi-attribute queries and a general class of select-join queries, our approach produces more accurate estimates than standard approaches to selectivity estimation, using comparable space and time.
3,053,207
9248333249450ef58b2c809c095235906b2809e2
Structured models for multi-agent interactions
The traditional representations of games using the extensive form or the strategic (normal) form obscure much of the structure that is present in real-world games. In this paper, we propose a new representation language for general multi-player noncooperative games --- multi-agent influence diagrams (MAIDs). This representation extends graphical models for probability distributions to a multi-agent decision-making context. The basic elements in the MAID representation are variables rather than strategies (as in the normal form) or events (as in the extensive form). They can thus explicitly encode structure involving the dependence relationships among variables. As a consequence, we can define a notion of strategic relevance of one decision variable to another: D' is strategically relevant to D if, to optimize the decision rule at D, the decision maker needs to take into consideration the decision rule at D'. We provide a sound and complete graphical criterion for determining strategic relevance. We then show how strategic relevance can be used to detect structure in games, allowing large games to be broken up into a set of interacting smaller games, which can be solved in sequence. We show that this decomposition can lead to substantial savings in the computational cost of finding Nash equilibria in these games.
2,475,732
98d3eb412957a354d775d39bb6b23f15adbe0986
Uncertainty in Artificial Intelligence: Proceedings of the Seventeenth Conference (2001), August 2-5, 2001, University of Washington, Seattle, Washington
null
160,847,685
991dc062c3fcfbe98e27e769843548be63fa8299
Probabilistic Models of Text and Link Structure for Hypertext Classification
Most text classification methods treat each document as an independent instance. However, in many text domains, documents are linked and the topics of linked documents are correlated. For example, web pages of related topics are often connected by hyperlinks and scientific papers from related fields are commonly linked by citations. We propose a unified probabilistic model for both the textual content and the link structure of a document collection. Our model is based on the recently introduced framework of Probabilistic Relational Models (PRMs), which allows us to capture correlations between linked documents. We show how to learn these models from data and use them efficiently for classification. Since exact methods for classification in these large models are intractable, we utilize belief propagation, an approximate inference algorithm. Belief propagation automatically induces a very natural behavior, where our knowledge about one document helps us classify related ones, which in turn help us classify others. We present preliminary empirical results on a dataset of university web pages.
12,897,830
a8797f1d253c75669d96e6fcceda2be3f8534e1d
Support Vector Machine Active Learning with Applications to Text Classification
Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
7,806,109
b31be41841d83b6e818355aa60a22befac099e0a
Multi-Agent Influence Diagrams for Representing and Solving Games
null
85,889
b7694f3798944a40e3e3572d668ec371d76b35dc
Selectivity Estimation using Probabilistic Models
null
3,053,207
c0fda2ab14245ca3775fdfc9b0fef1d309dfdb04
Multiagent Planning with Factored MDPs
We present a principled and efficient planning algorithm for cooperative multiagent dynamic systems. A striking feature of our method is that the coordination and communication between the agents is not imposed, but derived directly from the system dynamics and function approximation architecture. We view the entire multiagent system as a single, large Markov decision process (MDP), which we assume can be represented in a factored way using a dynamic Bayesian network (DBN). The action space of the resulting MDP is the joint action space of the entire set of agents. Our approach is based on the use of factored linear value functions as an approximation to the joint value function. This factorization of the value function allows the agents to coordinate their actions at runtime using a natural message passing scheme. We provide a simple and efficient method for computing such an approximate value function by solving a single linear program, whose size is determined by the interaction between the value function structure and the DBN. We thereby avoid the exponential blowup in the state and action space. We show that our approach compares favorably with approaches based on reward sharing. We also show that our algorithm is an efficient alternative to more complicated algorithms even in the single agent case.
6,487,585
0489e71af49c919f691c562c1ba0058bd2da136a
Support Vector Machine Active Learning with Applications to Text Classification
null
34,678,574
0680445a1a21d56189d062d42cc19529d743eb77
Learning Probabilistic Relational Models with Structural Uncertainty
Most real-world data is stored in relational form. In contrast, most statistical learning methods, e.g., Bayesian network learning, work only with “flat” data representations, forcing us to convert our data into a form that loses much of the relational structure. The recently introduced framework of probabilistic relational models (PRMs) allows us to represent much richer dependency structures, involving multiple entities and the relations between them; they allow the properties of an entity to depend probabilistically on properties of related entities. Friedman et al. showed how to learn PRMs that model the attribute uncertainty in relational data, and presented techniques for learning both parameters and probabilistic dependency structure for the attributes in a relational model. In this work, we propose methods for handling structural uncertainty in PRMs. Structural uncertainty is uncertainty over which entities are related in our domain. We propose two mechanisms for modeling structural uncertainty: reference uncertainty and existence uncertainty. We describe the appropriate conditions for using each model and present learning algorithms for each. We conclude with some preliminary experimental results comparing and contrasting the use of these mechanisms for learning PRMs in domains with structural uncertainty.
14,657,886
06a154b63c9e49840ded076ebe9e9915ea672e99
Restricted Bayes Optimal Classifiers
We introduce the notion of restricted Bayes optimal classifiers. These classifiers attempt to combine the flexibility of the generative approach to classification with the high accuracy associated with discriminative learning. They first create a model of the joint distribution over class labels and features. Instead of choosing the decision boundary induced directly from the model, they restrict the allowable types of decision boundaries and learn the one that minimizes the probability of misclassification relative to the estimated joint distribution. In this paper, we investigate two particular instantiations of this approach. The first uses a non-parametric density estimator (Parzen windows with Gaussian kernels) and hyperplane decision boundaries. We show that the resulting classifier is asymptotically equivalent to a maximal margin hyperplane classifier, a highly successful discriminative classifier. We therefore provide an alternative justification for maximal margin hyperplane classifiers. The second instantiation uses a mixture of Gaussians as the estimated density; in experiments on real-world data, we show that this approach allows data with missing values to be handled in a principled manner, leading to improved performance over regular discriminative approaches.
8,223,567
17714c1a50e306227cd5cd56af0bc203c7e43db7
Active Learning for Parameter Estimation in Bayesian Networks
Bayesian networks are graphical representations of probability distributions. In virtually all of the work on learning these networks, the assumption is that we are presented with a data set consisting of randomly generated instances from the underlying distribution. In many situations, however, we also have the option of active learning, where we have the possibility of guiding the sampling process by querying for certain types of samples. This paper addresses the problem of estimating the parameters of Bayesian networks in an active learning setting. We provide a theoretical framework for this problem, and an algorithm that chooses which active learning queries to generate based on the model learned so far. We present experimental results showing that our active learning algorithm can significantly reduce the need for training data in many situations.
2,386,340
2260208e8e2ae1ad01552900081e8e322fd6e5fc
Policy Iteration for Factored MDPs
Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has suggested that value functions in factored MDPs can often be approximated well using a factored value function: a linear combination of restricted basis functions, each of which refers only to a small subset of variables. An approximate factored value function for a particular policy can be computed using approximate dynamic programming, but this approach (and others) can only produce an approximation relative to a distance metric which is weighted by the stationary distribution of the current policy. This type of weighted projection is ill-suited to policy improvement. We present a new approach to value determination that uses a simple closed-form computation to compute a least-squares decomposed approximation to the value function for any weights directly. We then use this value determination algorithm as a subroutine in a policy iteration process. We show that, under reasonable restrictions, the policies induced by a factored value function can be compactly represented as a decision list, and can be manipulated efficiently in a policy iteration process. We also present a method for computing error bounds for decomposed value functions using a variable elimination algorithm for function optimization. The complexity of all of our algorithms depends on the factorization of the system dynamics and of the approximate value function.
47,383,473
39bb60618428e664c0f171c1658985fb40ac135b
Making Rational Decisions Using Adaptive Utility Elicitation
Rational decision making requires full knowledge of the utility function of the person affected by the decisions. However, in many cases, the task of acquiring such knowledge is not feasible due to the size of the outcome space and the complexity of the utility elicitation process. Given that the amount of utility information we can acquire is limited, we need to make decisions with partial utility information and should carefully select which utility elicitation questions we ask. In this paper, we propose a new approach for this problem that utilizes a prior probability distribution over the person’s utility function, perhaps learned from a population of similar people. The relevance of a utility elicitation question for the current decision problem can then be measured using its value of information. We propose an algorithm that interleaves the analysis of the decision problem and utility elicitation to allow these two tasks to inform each other. At every step, it asks the utility elicitation question giving us the highest value of information and computes the best strategy based on the information acquired so far, stopping when the expected utility loss resulting from our recommendation falls below a pre-specified threshold. We show how the various steps of this algorithm can be implemented efficiently.
7,774,476
4bbcc8d670f1f083ac208b50f317a956d635b25a
Probabilistic Models for Agent's Beliefs and Decisions
Many applications of intelligent systems require reasoning about the mental states of agents in the domain. We may want to reason about an agent's beliefs, including beliefs about other agents; we may also want to reason about an agent's preferences, and how his beliefs and preferences relate to his behavior. We define a probabilistic epistemic logic (PEL) in which belief statements are given a formal semantics, and provide an algorithm for asserting and querying PEL formulas in Bayesian networks. We then show how to reason about an agent's behavior by modeling his decision process as an influence diagram and assuming that he behaves rationally. PEL can then be used for reasoning from an agent's observed actions to conclusions about other aspects of the domain, including unobserved domain variables and the agent's mental states.
2,459,903
4e8495f395beb75bec15ca3ace8a12daa7cb86ea
Discovering Hidden Variables: A Structure-Based Approach
A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet interact with several of the observed variables. As such, they induce seemingly complex dependencies among the latter. In recent years, much attention has been devoted to the development of algorithms for learning parameters, and in some cases structure, in the presence of hidden variables. In this paper, we address the related problem of detecting hidden variables that interact with the observed variables. This problem is of interest both for improving our understanding of the domain and as a preliminary step that guides the learning procedure towards promising models. A very natural approach is to search for "structural signatures" of hidden variables - substructures in the learned network that tend to suggest the presence of a hidden variable. We make this basic idea concrete, and show how to integrate it with structure-search algorithms. We evaluate this method on several synthetic and real-life datasets, and show that it performs surprisingly well.
1,440,864
66c138148d9fa3ad9528e91d5924ee1d304257c6
Being Bayesian about Network Structure
In many domains, we are interested in analyzing the structure of the underlying distribution, e.g., whether one variable is a direct parent of the other. Bayesian model-selection attempts to find the MAP model and use its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want to compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed ordering over network variables. This allows us to compute, for a given ordering, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov chain Monte Carlo (MCMC) method, but over orderings rather than over network structures. The space of orderings is much smaller and more regular than the space of structures, and has a smoother posterior "landscape". We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach.
7,676,706
84b0172b5dd2af16cd15d9db879bbc5dd282b23b
Using Feature Hierarchies in Bayesian Network Learning
In recent years, researchers in statistics and the UAI community have developed an impressive body of theory and algorithmic machinery for learning Bayesian networks from data. Learned Bayesian networks can be used for pattern discovery, prediction, diagnosis, and density estimation tasks. Early pioneering work in this area includes [5, 9, 10, 13]. The algorithm that has emerged as the current most popular approach is a simple greedy hill-climbing algorithm that searches the space of candidate structures, guided by a network scoring function (either Bayesian or Minimum Description Length (MDL)-based). The search begins with an initial candidate network (typically the empty network, which has no edges), and then considers making small local changes such as adding, deleting, or reversing an edge in the network.
39,114,563
a93ab3635a754fe48a273b3c000776610a6b2810
From Instances to Classes in Probabilistic Relational Models
Probabilistic graphical models, in particular Bayesian networks, are useful models for representing statistical patterns in propositional domains. Recent work develops effective techniques for learning these models directly from data. However, these techniques apply only to attribute-value (i.e., flat) representations of the data. Probabilistic relational models (PRMs) allow us to represent much richer dependency structures, involving multiple entities and the relations between them; they allow the properties of an entity to depend probabilistically on properties of related entities. PRMs represent a generic dependence, which is then instantiated for specific circumstances, i.e., for a particular set of entities and relations between them. Friedman et al. showed how to learn PRMs from relational data, and presented techniques for learning both parameters and probabilistic dependency structure for the attributes in a relational model. Here we examine the benefit that class hierarchies can provide PRMs. We show how the introduction of subclasses allows us to use inheritance and specialization to refine our models. We show how to learn PRMs with class hierarchies (PRM-CHs) in two settings. In the first, the class hierarchy is provided, as part of the input, in the relational schema for the domain. In the second setting, in addition to learning the PRM, we must learn the class hierarchy. Finally, we discuss how PRM-CHs allow us to build models that can represent both particular instances in our domain and classes of objects in our domain, bridging the gap between a class-based model and an attribute-value-based model.
12,749,769
b8dbae96e0b9b310785d8f56587b4f6bc3ebd01f
First-order conditional logic for default reasoning revisited
Conditional logics play an important role in recent attempts to formulate theories of default reasoning. This paper investigates first-order conditional logic. We show that, as for first-order probabilistic logic, it is important not to confound statistical conditionals over the domain (such as “most birds fly”), and subjective conditionals over possible worlds (such as “I believe that Tweety is unlikely to fly”). We then address the issue of ascribing semantics to first-order conditional logic. As in the propositional case, there are many possible semantics. To study the problem in a coherent way, we use plausibility structures. These provide us with a general framework in which many of the standard approaches can be embedded. We show that while these standard approaches are all the same at the propositional level, they are significantly different in the context of a first-order language. Furthermore, we show that plausibilities provide the most natural extension of conditional logic to the first-order case: we provide a sound and complete axiomatization that contains only the KLM properties and standard axioms of first-order modal logic. We show that most of the other approaches have additional properties, which result in an inappropriate treatment of an infinitary version of the lottery paradox.
12,937,563
c57b1ca011205c5f22a7ff8e57d373e107891fa2
Bayesian Fault Detection and Diagnosis in Dynamic Systems
This paper addresses the problem of tracking and diagnosing complex systems with mixtures of discrete and continuous variables. This problem is a difficult one, particularly when the system dynamics are nondeterministic, not all aspects of the system are directly observed, and the sensors are subject to noise. In this paper, we propose a new approach to this task, based on the framework of hybrid dynamic Bayesian networks (DBN). These models contain both continuous variables representing the state of the system and discrete variables representing discrete changes such as failures; they can model a variety of faults, including burst faults, measurement errors, and gradual drifts. We present a novel algorithm for tracking in hybrid DBNs, that deals with the challenges posed by this difficult problem. We demonstrate how the resulting algorithm can be used to detect faults in a complex system.
85,415
f71b772930bce36b72db9ad9d61a3bf7ff36344f
Semantics and Inference for Recursive Probability Models
In recent years, there have been several proposals that extend the expressive power of Bayesian networks with that of relational models. These languages open the possibility for the specification of recursive probability models, where a variable might depend on a potentially infinite (but finitely describable) set of variables. These models are very natural in a variety of applications, e.g., in temporal, genetic, or language models. In this paper, we provide a structured representation language that allows us to specify such models, a clean measure-theoretic semantics for this language, and a probabilistic inference algorithm that exploits the structure of the language for efficient query-answering.
7,482
fb0d2f0f9d7f783feb6381f2a750e2a14060619c
Utilities as Random Variables: Density Estimation and Structure Discovery
Decision theory does not traditionally include uncertainty over utility functions. We argue that a person's utility value for a given outcome can be treated as we treat other domain attributes: as a random variable with a density function over its possible values. We show that we can apply statistical density estimation techniques to learn such a density function from a database of partially elicited utility functions. In particular, we define a Bayesian learning framework for this problem, assuming the distribution over utilities is a mixture of Gaussians, where the mixture components represent statistically coherent subpopulations. We can also extend our techniques to the problem of discovering generalized additivity structure in the utility functions in the population. We define a Bayesian model selection criterion for utility function structure and a search procedure over structures. The factorization of the utilities in the learned model, and the generalization obtained from density estimation, allows us to provide robust estimates of utilities using a significantly smaller number of utility elicitation questions. We experiment with our technique on synthetic utility data and on a real database of utility functions in the domain of prenatal diagnosis.
207,511
2d005ede56be0bd14ef1a8606b105bfcb33d20eb
Computing Factored Value Functions for Policies in Structured MDPs
Many large Markov decision processes (MDPs) can be represented compactly using a structured representation such as a dynamic Bayesian network. Unfortunately, the compact representation does not help standard MDP algorithms, because the value function for the MDP does not retain the structure of the process description. We argue that in many such MDPs, structure is approximately retained. That is, the value functions are nearly additive: closely approximated by a linear function over factors associated with small subsets of problem features. Based on this idea, we present a convergent, approximate value determination algorithm for structured MDPs. The algorithm maintains an additive value function, alternating dynamic programming steps with steps that project the result back into the restricted space of additive functions. We show that both the dynamic programming and the projection steps can be computed efficiently, despite the fact that the number of states is exponential in the number of state variables.
14,033,901
63e664998d8947e34a4cfc401562a172458a7053
A General Algorithm for Approximate Inference and Its Application to Hybrid Bayes Nets
The clique tree algorithm is the standard method for doing inference in Bayesian networks. It works by manipulating clique potentials - distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This paper presents a new unified approach that combines approximate inference and the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach. The algorithm essentially does clique tree propagation, using approximate inference to estimate the densities in each clique. In many settings, the computation of the approximate clique potential can be done easily using statistical importance sampling. Iterations are used to gradually improve the quality of the estimation.
7,824,624
8935f2be6bf8391bb4ececf564f565bb14401a4d
P-CLASSIC: A tractable probabilistic description logic
Knowledge representation languages invariably reflect a trade-off between expressivity and tractability. Evidence suggests that the compromise chosen by description logics is a particularly successful one. However, description logic (as for all variants of first-order logic) is severely limited in its ability to express uncertainty. In this paper, we present P-CLASSIC, a probabilistic version of the description logic CLASSIC. In addition to terminological knowledge, the language utilizes Bayesian networks to express uncertainty about the basic properties of an individual, the number of fillers for its roles, and the properties of these fillers. We provide a semantics for P-CLASSIC and an effective inference procedure for probabilistic subsumption: computing the probability that a random individual in class C is also in class D. The effectiveness of the algorithm relies on independence assumptions and on our ability to execute lifted inference: reasoning about similar individuals as a group rather than as separate ground terms. We show that the complexity of the inference algorithm is the best that can be hoped for in a language that combines description logic with Bayesian networks. In particular, if we restrict to Bayesian networks that support polynomial time inference, the complexity of our inference procedure is also polynomial time.
16,213,335
8bdbd6b5d35285b92f816990963e20e99888f661
Probabilistic reasoning for complex systems
Reasoning under uncertainty is a central issue in artificial intelligence. Real-world agents must deal with noisy sensor information, non-deterministic effects of actions, and unpredictable exogenous events. Probabilistic reasoning methods, and Bayesian networks (BNs) in particular, have emerged as an effective and principled method for reasoning under uncertainty. BNs exploit conditional independence relationships to create natural and compact domain models, thereby supporting useful reasoning patterns, and providing effective probabilistic inference and learning algorithms. However, BNs are inherently limited by their attribute-based nature, making it difficult to apply them to large, complex domains. This thesis addresses the issue of representing and reasoning about probabilistic models of complex systems. We believe that the key to reasoning effectively about complex systems is to provide a language that supports the expression of system structure. We present a powerful object-based representation language, that integrates logical and probabilistic representations. Our language provides the ability to create structured, modular probabilistic models. The language maintains the key advantages of BNs, exploiting conditional independence relationships. In addition, it is capable of representing other aspects of system structure not represented in BNs. In particular, it supports the decomposition of complex systems into weakly interacting subsystems, and the reuse of models for many different components of a system. Another key benefit of our language is that it is very flexible. The same probabilistic representations can be applied in many different situations, with very different configurations. In fact, our language can even represent uncertainty over the system configuration itself, and integrate that uncertainty directly with uncertainty over the basic properties of objects in the system. Our framework also supports the representation of powerful recursive probability models. We present inference algorithms for our language that exploit the structure that can be expressed in it—not only the conditional independence structure normally exploited by BN algorithms, but also encapsulation, reuse of computation and symmetry resulting from the object-based representation. We describe an implemented system that supports representation and reasoning with models in our language, and provide experimental results demonstrating the advantages of exploiting structure in inference.
53,752,318
a5da32530c4649527b82c17b5c39e46f6dee374c
Probabilistic Relational Models
Probabilistic models provide a sound and coherent foundation for dealing with the noise and uncertainty encountered in most real-world domains. Bayesian networks are a language for representing complex probabilistic models in a compact and natural way. A Bayesian network can be used to reason about any attribute in the domain, given any set of observations. It can thus be used for a variety of tasks, including prediction, explanation, and decision making. The probabilistic semantics also gives a strong foundation for the task of learning models from data. Techniques currently exist for learning both the structure and the parameters, for dealing with missing data and hidden variables, and for discovering causal structure. One of the main limitations of Bayesian networks is that they represent the world in terms of a fixed set of "attributes". Like propositional logic, they are incapable of reasoning explicitly about entities, and thus cannot represent models over domains where the set of entities and the relations between them are not fixed in advance. As a consequence, Bayesian networks are limited in their ability to model large and complex domains. Probabilistic relational models are a language for describing probabilistic models based on the significantly more expressive basis of relational logic. They allow the domain to be represented in terms of entities, their properties, and the relations between them. These models represent the uncertainty over the properties of an entity, representing its probabilistic dependence both on other properties of that entity and on properties of related entities. They can even represent uncertainty over the relational structure itself. Some of the techniques for Bayesian network learning can be generalized to this setting, but the learning problem is far from solved. Probabilistic relational models provide a new framework, and new challenges, for the endeavor of learning relational models for real-world domains.
26,932,183
aa12e1ab58044c53ccc6d2057f480098ee6ebe1b
Reinforcement Learning Using Approximate Belief States
The problem of developing good policies for partially observable Markov decision problems (POMDPs) remains one of the most challenging areas of research in stochastic planning. One line of research in this area involves the use of reinforcement learning with belief states, probability distributions over the underlying model states. This is a promising method for small problems, but its application is limited by the intractability of computing or representing a full belief state for large problems. Recent work shows that, in many settings, we can maintain an approximate belief state, which is fairly close to the true belief state. In particular, great success has been shown with approximate belief states that marginalize out correlations between state variables. In this paper, we investigate two methods of full belief state reinforcement learning and one novel method for reinforcement learning using factored approximate belief states. We compare the performance of these algorithms on several well-known problems from the literature. Our results demonstrate the importance of approximate belief state representations for large problems.
11,929,008
ab55dbfdf014274dd86848d5d0dd7b46d8c14884
Learning the Structure of Utility Functions
Utility functions are defined over a space which is exponential in the number of variables on which the utility depends. More compact representations of the utility are possible if we make certain assumptions about additive independence among the variables. These assumptions allow the function to be decomposed into smaller components, thus reducing the number of parameters needed to specify it completely. Decomposable utility functions support more efficient inference and are easier to elicit from people. However, it can be difficult to know which decomposition is appropriate in a given setting. We hypothesize that there is some commonality to the utilities exhibited by a population of users; more precisely, we assume that the population is divided (in an unknown way) into subpopulations, each of which is statistically coherent. We can view the problem of discovering the structure of this distribution over utilities in a population as a statistical learning task. We show how we can apply Bayesian learning techniques to learn the distribution over factored utility functions from a set of fully specified utility functions elicited from a population of users. Our approach can be used for a wide range of independence types, including conditional additive independence and generalized additive independence. We show how to choose a utility decomposition appropriate to a large subpopulation by performing statistical model selection, using an approximation to the Bayesian score. The factorization of the utilities in the learned model facilitates utility elicitation by allowing fully specified utility functions to be assessed using a significantly smaller number of questions. The generalization obtained from learning a model for a population of similar people allows smoother estimates of the utility function, thereby reducing the noise unavoidable in utility assessment.
12,571,841
afe5f47ad0e5b9ce71f816d981eb4b6a13ed189f
Discovering the Hidden Structure of Complex Dynamic Systems
Dynamic Bayesian networks provide a compact and natural representation for complex dynamic systems. However, in many cases, there is no expert available from whom a model can be elicited. Learning provides an alternative approach for constructing models of dynamic systems. In this paper, we address some of the crucial computational aspects of learning the structure of dynamic systems, particularly those where some relevant variables are partially observed or even entirely unknown. Our approach is based on the Structural Expectation Maximization (SEM) algorithm. The main computational cost of the SEM algorithm is the gathering of expected sufficient statistics. We propose a novel approximation scheme that allows these sufficient statistics to be computed efficiently. We also investigate the fundamental problem of discovering the existence of hidden variables without exhaustive and expensive search. Our approach is based on the observation that, in dynamic systems, ignoring a hidden variable typically results in a violation of the Markov property. Thus, our algorithm searches for such violations in the data, and introduces hidden variables to explain them. We provide empirical results showing that the algorithm is able to learn the dynamics of complex systems in a computationally tractable way.
2,027,953
be7757499295055f1c4f4c04dd1b38a1dc02c784
Sensors & Symbols: An Integrated Framework
Abstract: The goal of this effort was to provide a unified probabilistic framework that integrates symbolic and sensory reasoning. Such a framework would allow sensor data to be analyzed in terms of high-level symbolic models. It would also allow the results of high-level analysis to guide the low-level sensor interpretation task and to help in resolving ambiguities in the sensor data. Our approach was based on the framework of probabilistic graphical models, which allows us to build systems that learn and reason with complex models, encompassing both low-level continuous sensor data and high-level symbolic concepts. Over the five years of the project, we explored two main thrusts: (1) inference and learning in hybrid and temporal Bayesian networks, and (2) mapping and modeling of 3D physical environments. Our progress on each of these two directions is detailed in the attached report.
60,963,703
e3678df4a8c183fc50d59e6277284078785600e6
Efficient Reinforcement Learning in Factored MDPs
We present a provably efficient and near-optimal algorithm for reinforcement learning in Markov decision processes (MDPs) whose transition model can be factored as a dynamic Bayesian network (DBN). Our algorithm generalizes the recent E3 algorithm of Kearns and Singh, and assumes that we are given both an algorithm for approximate planning, and the graphical structure (but not the parameters) of the DBN. Unlike the original E3 algorithm, our new algorithm exploits the DBN structure to achieve a running time that scales polynomially in the number of parameters of the DBN, which may be exponentially smaller than the number of global states.
7,620,621
e8bd3a622b3f139a6d56552a6d4b150203d755c0
Exploiting the Architecture of Dynamic Systems
Consider the problem of monitoring the state of a complex dynamic system, and predicting its future evolution. Exact algorithms for this task typically maintain a belief state, or distribution over the states at some point in time. Unfortunately, these algorithms fail when applied to complex processes such as those represented as dynamic Bayesian networks (DBNs), as the representation of the belief state grows exponentially with the size of the process. In (Boyen & Koller 1998), we recently proposed an efficient approximate tracking algorithm that maintains an approximate belief state that has a compact representation as a set of independent factors. Its performance depends on the error introduced by approximating a belief state of this process by a factored one. We informally argued that this error is low if the interaction between variables in the processes is "weak". In this paper, we give formal information-theoretic definitions for notions such as weak interaction and sparse interaction of processes. We use these notions to analyze the conditions under which the error induced by this type of approximation is small. We demonstrate several cases where our results formally support intuitions about strength of interaction.
1,709,942
ec822aa9e757cd5be1d0bca95283de15babf5406
Policy Search via Density Estimation
We propose a new approach to the problem of searching a space of stochastic controllers for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP). Following several other authors, our approach is based on searching in parameterized families of policies (for example, via gradient descent) to optimize solution quality. However, rather than trying to estimate the values and derivatives of a policy directly, we do so indirectly using estimates for the probability densities that the policy induces on states at the different points in time. This enables our algorithms to exploit the many techniques for efficient and robust approximate density propagation in stochastic systems. We show how our techniques can be applied both to deterministic propagation schemes (where the MDP's dynamics are given explicitly in compact form) and to stochastic propagation schemes (where we have access only to a generative model, or simulator, of the MDP). We present empirical results for both of these variants on complex problems.
3,110,662
ee566259db1fa8de24c47da0f5bc24600015dd51
SPOOK: A system for probabilistic object-oriented knowledge representation
In previous work, we pointed out the limitations of standard Bayesian networks as a modeling framework for large, complex domains. We proposed a new, richly structured modeling language, Object-oriented Bayesian Networks, that we argued would be able to deal with such domains. However, it turns out that OOBNs are not expressive enough to model many interesting aspects of complex domains: the existence of specific named objects, arbitrary relations between objects, and uncertainty over domain structure. These aspects are crucial in real-world domains such as battlefield awareness. In this paper, we present SPOOK, an implemented system that addresses these limitations. SPOOK implements a more expressive language that allows it to represent the battlespace domain naturally and compactly. We present a new inference algorithm that utilizes the model structure in a fundamental way, and show empirically that it achieves orders of magnitude speedup over existing approaches.
10,666,630
1835df6aa11b5e8ad2e33a92b86f655c5d3903a2
Structured Representation of Complex Stochastic Systems
This paper considers the problem of representing complex systems that evolve stochastically over time. Dynamic Bayesian networks provide a compact representation for stochastic processes. Unfortunately, they are often unwieldy since they cannot explicitly model the complex organizational structure of many real life systems: the fact that processes are typically composed of several interacting subprocesses, each of which can, in turn, be further decomposed. We propose a hierarchically structured representation language which extends both dynamic Bayesian networks and the object-oriented Bayesian network framework of [9], and show that our language allows us to describe such systems in a natural and modular way. Our language supports a natural representation for certain system characteristics that are hard to capture using more traditional frameworks. For example, it allows us to represent systems where some processes evolve at a different rate than others, or systems where the processes interact only intermittently. We provide a simple inference mechanism for our representation via translation to Bayesian networks, and suggest ways in which the inference algorithm can exploit the additional structure encoded in our representation.
726,622
521cc9e51b6fe7576c9b578e73df8d21f5539d3c
Structured Probabilistic Models: Bayesian Networks and Beyond
For many years, probabilistic models were largely neglected within the AI community. Now they play a fundamental role in many areas in AI, including diagnosis, planning, and learning. One of the crucial reasons for this transition is the use of structured model-based representations such as Bayesian networks. Building on this idea, we can extend the success of probabilistic modeling to much more complex domains, ones involving many components that interact and evolve over time. These domains are significantly beyond the scope of traditional Bayesian networks. I describe a broad class of structured probabilistic representations that extend Bayesian networks to deal with these new challenges. I argue that these representations can form the basis for agents that reason and act in complex uncertain environments.
42,416,244
8bb69717dc3ceb1b6b6c9cd16ce4f1b20e3760f9
Hierarchically Classifying Documents Using Very Few Words
The proliferation of topic hierarchies for text documents has resulted in a need for tools that automatically classify new documents within such hierarchies. One can use existing classifiers by ignoring the hierarchical structure, treating the topics as separate classes. Unfortunately, in the context of text categorization, we are faced with a large number of classes and a huge number of relevant features needed to distinguish between them. Consequently, we are restricted to using only very simple classifiers, both because of computational cost and the tendency of complex models to overfit. We propose an approach that utilizes the hierarchical topic structure to decompose the classification task into a set of simpler problems, one at each node in the classification tree. As we show, each of these smaller problems can be solved accurately by focusing only on a very small set of features, those relevant to the task at hand. This set of relevant features varies widely throughout the hierarchy, so that, while the overall relevant feature set may be large, each classifier only examines a small subset. The use of reduced feature sets allows us to utilize more complex (probabilistic) models, without encountering the computational and robustness difficulties described above.
16,682,557
92aea50331c19fe9716d3a9a02e26704afe24d88
Tractable Inference for Complex Stochastic Processes
The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state--a probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.
5,556,701
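As an aside, a minimal Python sketch of the propagate-then-project loop this abstract describes: the belief state is repeatedly pushed through the exact dynamics and then collapsed back to a product of marginals. The two-variable chain, its probabilities, and all names below are invented for illustration and are not the paper's model.

import itertools

# Two binary state variables X and Y. The transition model is factored as
# P(X'|X,Y) * P(Y'|Y); the numbers are arbitrary but normalized.
def p_x_next(x_next, x, y):
    base = 0.8 if x_next == x else 0.2
    return base + (0.05 if y == x_next else -0.05)

def p_y_next(y_next, y):
    return 0.9 if y_next == y else 0.1

def propagate(joint):
    # One exact propagation step; joint maps (x, y) -> probability.
    new = {}
    for xn, yn in itertools.product((0, 1), repeat=2):
        new[(xn, yn)] = sum(p * p_x_next(xn, x, y) * p_y_next(yn, y)
                            for (x, y), p in joint.items())
    return new

def project(joint):
    # The factored approximation: replace the joint by the product of its marginals.
    px = {v: sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)}
    py = {v: sum(p for (_, y), p in joint.items() if y == v) for v in (0, 1)}
    return {(x, y): px[x] * py[y] for x, y in itertools.product((0, 1), repeat=2)}

belief = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
for _ in range(5):
    belief = project(propagate(belief))   # propagate exactly, then project
print(belief)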
9db120481e4b69de756f067ec50eac0731dae864
Using machine learning to improve information access
The explosion of on-line information has given rise to many query-based search engines (such as Alta Vista) and manually constructed topic hierarchies (such as Yahoo!). But with the current growth rate in the amount of information, query results grow incomprehensibly large and manual classification in topic hierarchies creates an immense information bottleneck. Therefore, these tools are rapidly becoming inadequate for addressing users' information needs. In this dissertation, we address these problems with a system for topical information space navigation that combines the query-based and taxonomic approaches. Our system, named SONIA (Service for Organizing Networked Information Autonomously), is implemented as part of the Stanford Digital Libraries testbed. It enables the creation of dynamic hierarchical document categorizations based on the full-text of articles. Using probability theory as a formal foundation, we develop several Machine Learning methods to allow document collections to be automatically organized at a topical level. First, to generate such topical hierarchies, we employ a novel probabilistic clustering scheme that outperforms traditional methods used in both Information Retrieval and Probabilistic Reasoning. Furthermore, we develop methods for classifying new articles into such automatically generated, or existing manually generated, hierarchies. In contrast to standard classification approaches which do not make use of the taxonomic relations in a topic hierarchy, our method explicitly uses the existing hierarchical relationships between topics, leading to improvements in classification accuracy. Much of this improvement is derived from the fact that the classification decisions in such a hierarchy can be made by considering only the presence (or absence) of a small number of features (words) in each document. The choice of relevant words is made using a novel information theoretic algorithm for feature selection. Many of the components developed as part of SONIA are also general enough that they have been successfully applied to data mining problems in different domains than text. The integration of hierarchical clustering and classification will allow large amounts of information to be organized and presented to users in an individualized and comprehensible way. By alleviating the information bottleneck, we hope to help users with the problems of information access on the Internet.
6,384,627
b97ec7b4f8b3cd921bd44b962be00dbb199499be
Probabilistic Frame-Based Systems
Two of the most important threads of work in knowledge representation today are frame-based representation systems (FRS's) and Bayesian networks (BNs). FRS's provide an excellent representation for the organizational structure of large complex domains, but their applicability is limited because of their inability to deal with uncertainty and noise. BNs provide an intuitive and coherent probabilistic representation of our uncertainty, but are very limited in their ability to handle complex structured domains. In this paper, we provide a language that cleanly integrates these approaches, preserving the advantages of both. Our approach allows us to provide natural and compact definitions of probability models for a class, in a way that is local to the class frame. These models can be instantiated for any set of interconnected instances, resulting in a coherent probability distribution over the instance properties. Our language also allows us to represent important types of uncertainty that cannot be accommodated within the framework of traditional BNs: uncertainty over the set of entities present in our model, and uncertainty about the relationships between these entities. We provide an inference algorithm for our language via a reduction to inference in standard Bayesian networks. We describe an implemented system that allows most of the main frame systems in existence today to annotate their knowledge bases with probabilistic information, and to use that information in answering probabilistic queries.
15,856,377
b9e7c948ec2d8ff655f9f51b4ddeb8b221eef8df
Approximate Learning of Dynamic Models
Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the many inference phases requires a traversal over an entire long data sequence; furthermore, the data structures manipulated are exponentially large, making this process computationally expensive. In [2], we describe an approximate inference algorithm for monitoring stochastic processes, and prove bounds on its approximation error. In this paper, we apply this algorithm as an approximate forward propagation step in an EM algorithm for learning temporal Bayesian networks. We provide a related approximation for the backward step, and prove error bounds for the combined algorithm. We show empirically that, for a real-life domain, EM using our inference algorithm is much faster than EM using exact inference, with almost no degradation in quality of the learned model. We extend our analysis to the online learning task, showing a bound on the error resulting from restricting attention to a small window of observations. We present an online EM learning algorithm for dynamic systems, and show that it learns much faster than standard offline EM.
5,271,129
cb453d990e2ca57395397b49cb66f6e757e8156d
Efficient inference in Bayesian networks
null
59,748,962
f13ba8f58399b91fd0ecd86f68089d5a305886df
Using Learning for Approximation in Stochastic Processes
To monitor or control a stochastic dynamic system, we need to reason about its current state. Exact inference for this task requires that we maintain a complete joint probability distribution over the possible states, an impossible requirement for most processes. Stochastic simulation algorithms provide an alternative solution by approximating the distribution at time t via a (relatively small) set of samples. The time t samples are used as the basis for generating the samples at time t + 1. However, since only existing samples are used as the basis for the next sampling phase, new parts of the space are never explored. We propose an approach whereby we try to generalize from the time t samples to unsampled regions of the state space. Thus, these samples are used as data for learning a distribution over the states at time t, which is then used to generate the time t+1 samples. We examine different representations for a distribution, including density trees, Bayesian networks, and tree-structured Bayesian networks, and evaluate their appropriateness to the task. The machine learning perspective allows us to examine issues such as the tradeoffs of using more complex models, and to utilize important techniques such as regularization and priors. We validate the performance of our algorithm on both artificial and real domains, and show significant improvement in accuracy over the existing approach.
9,659,560
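A minimal sketch of the idea above, under invented assumptions: a lazy random walk stands in for the process, and a Laplace-smoothed histogram stands in for the density trees and Bayesian networks the paper actually evaluates. The point is only that the next generation of samples is drawn from a learned model, so states no current sample occupies can still be explored.

import random

STATES = list(range(10))

def transition(s):
    # Toy dynamics: a lazy random walk on 0..9.
    return max(0, min(9, s + random.choice([-1, 0, 1])))

def fit_density(samples, alpha=1.0):
    # Laplace-smoothed histogram: a stand-in for the learned distribution,
    # which assigns some mass even to states no current sample occupies.
    counts = {s: alpha for s in STATES}
    for s in samples:
        counts[s] += 1
    total = sum(counts.values())
    return [counts[s] / total for s in STATES]

particles = [0] * 100
for _ in range(20):
    weights = fit_density(particles)                    # learn from time-t samples
    particles = random.choices(STATES, weights, k=100)  # regenerate from the model
    particles = [transition(s) for s in particles]      # simulate to time t+1
print(sorted(set(particles)))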
23354987095a8a9a283ce4c9a690522d6b11e2dd
Hierarchically Classifying Documents Using Very Few Words
The proliferation of topic hierarchies for text documents has resulted in a need for tools that automatically classify new documents within such hierarchies. One can use existing classifiers by ignoring the hierarchical structure, treating the topics as separate classes. Unfortunately, in the context of text categorization, we are faced with a large number of classes and a huge number of relevant features needed to distinguish between them. Consequently, we are restricted to using only very simple classifiers, both because of computational cost and the tendency of complex models to overfit. We propose an approach that utilizes the hierarchical topic structure to decompose the classification task into a set of simpler problems, one at each node in the classification tree. As we show, each of these smaller problems can be solved accurately by focusing only on a very small set of features, those relevant to the task at hand. This set of relevant features varies widely throughout the hierarchy, so that, while the overall relevant feature set may be large, each classifier only examines a small subset. The use of reduced feature sets allows us to utilize more complex (probabilistic) models, without encountering the computational and robustness difficulties described above.
2,112,467
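A minimal sketch of the per-node scheme this abstract proposes, with an invented two-topic hierarchy and word lists: each internal node of the topic tree gets its own small classifier that looks only at its own handful of relevant words.

from collections import Counter, defaultdict

class NodeClassifier:
    # A tiny multinomial Naive Bayes over the node's few relevant words.
    def __init__(self, features):
        self.features = set(features)
        self.word_counts = defaultdict(Counter)   # child topic -> word counts
        self.doc_counts = Counter()               # child topic -> number of docs

    def train(self, child, words):
        self.doc_counts[child] += 1
        self.word_counts[child].update(w for w in words if w in self.features)

    def classify(self, words):
        words = [w for w in words if w in self.features]
        def score(child):
            total = sum(self.word_counts[child].values()) + len(self.features)
            s = float(self.doc_counts[child])     # unnormalized class prior
            for w in words:
                s *= (self.word_counts[child][w] + 1) / total   # Laplace smoothing
            return s
        return max(self.doc_counts, key=score)

# The root of an invented two-topic hierarchy; deeper nodes would get
# their own classifiers with their own small feature sets.
root = NodeClassifier(["quark", "gene", "sonata", "canvas"])
root.train("science", ["quark", "gene", "lab"])
root.train("arts", ["sonata", "canvas", "museum"])
print(root.classify(["gene", "quark", "results"]))   # -> science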
2b310e1250580a7403252807296324de0c899262
Using Probabilistic Information in Data Integration
The goal of a mediator system is to provide users with a uniform interface to the multitude of information sources. To translate user queries, given in a mediated schema, to queries on the data sources, mediators rely on explicit mappings between the contents of the data sources and the meanings of the relations in the mediated schema. Thus far, the contents of data sources have been described qualitatively. In this paper we describe the use of quantitative information in the form of probabilistic knowledge in mediator systems. We consider several kinds of probabilistic information: information about overlap between collections in the mediated schema, coverage of the information sources, and degrees of overlap between information sources. We address the problem of ordering accesses to multiple information sources, in order to maximize the likelihood of obtaining answers as early as possible. We describe a declarative formalism for specifying these kinds of probabilistic information, and we propose algorithms for ordering the information sources. Finally, we discuss a preliminary experimental evaluation of these algorithms on the domain of bibliographic sources available on the WWW.
15,927,943
2b740fd570704b219a0890d6ba69b81be6292921
Hierarchically Classifying Documents Using Very Few Words
The proliferation of topic hierarchies for text documents has resulted in a need for tools that automatically classify new documents within such hierarchies. One can use existing classifiers by ignoring the hierarchical structure, treating the topics as separate classes. Unfortunately, in the context of text categorization, we are faced with a large number of classes and a huge number of relevant features needed to distinguish between them. Consequently, we are restricted to using only very simple classifiers, both because of computational cost and the tendency of complex models to overfit. We propose an approach that utilizes the hierarchical topic structure to decompose the classification task into a set of simpler problems, one at each node in the classification tree. As we show, each of these smaller problems can be solved accurately by focusing only on a very small set of features, those relevant to the task at hand. This set of relevant features varies widely throughout the hierarchy, so that, while the overall relevant feature set may be large, each classifier only examines a small subset. The use of reduced feature sets allows us to utilize more complex (probabilistic) models, without encountering the computational and robustness difficulties described above.
18,245,192
3b54cb0af7601b84bf5db370c5379e164cdbb7eb
P-CLASSIC: A Tractable Probabilistic Description Logic
Knowledge representation languages invariably reflect a trade-off between expressivity and tractability. Evidence suggests that the compromise chosen by description logics is a particularly successful one. However, description logic (as for all variants of first-order logic) is severely limited in its ability to express uncertainty. In this paper, we present P-CLASSIC, a probabilistic version of the description logic CLASSIC. In addition to terminological knowledge, the language utilizes Bayesian networks to express uncertainty about the basic properties of an individual, the number of fillers for its roles, and the properties of these fillers. We provide a semantics for P-CLASSIC and an effective inference procedure for probabilistic subsumption: computing the probability that a random individual in class C is also in class D. The effectiveness of the algorithm relies on independence assumptions and on our ability to execute lifted inference: reasoning about similar individuals as a group rather than as separate ground terms. We show that the complexity of the inference algorithm is the best that can be hoped for in a language that combines description logic with Bayesian networks. In particular, if we restrict to Bayesian networks that support polynomial time inference, the complexity of our inference procedure is also polynomial time.
11,227,424
419438bc4f6652784f42cf3e62c975a5c89b817e
Object-Oriented Bayesian Networks
Bayesian networks provide a modeling language and associated inference algorithm for stochastic domains. They have been successfully applied in a variety of medium-scale applications. However, when faced with a large complex domain, the task of modeling using Bayesian networks begins to resemble the task of programming using logical circuits. In this paper, we describe an object-oriented Bayesian network (OOBN) language, which allows complex domains to be described in terms of inter-related objects. We use a Bayesian network fragment to describe the probabilistic relations between the attributes of an object. These attributes can themselves be objects, providing a natural framework for encoding part-of hierarchies. Classes are used to provide a reusable probabilistic model which can be applied to multiple similar objects. Classes also support inheritance of model fragments from a class to a subclass, allowing the common aspects of related classes to be defined only once. Our language has clear declarative semantics: an OOBN can be interpreted as a stochastic functional program, so that it uniquely specifies a probabilistic model. We provide an inference algorithm for OOBNs, and show that much of the structural information encoded by an OOBN--particularly the encapsulation of variables within an object and the reuse of model fragments in different contexts---can also be used to speed up the inference process.
1,939,888
92453444d69d19b276c0d507a68fdd400d16e41e
Representations and Solutions for Game-Theoretic Problems
null
18,428,433
97016181c645b638859fb3cc7003e3c2f6cbde72
Update Rules for Parameter Estimation in Bayesian Networks
This paper re-examines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in on-line learning [Kivinen & Warmuth, 1994]. We provide a unified framework for parameter estimation that encompasses both on-line learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process. In the batch case, our framework encompasses both the gradient projection algorithm and the EM algorithm for Bayesian networks. The framework also leads to new on-line and batch parameter update schemes, including a parameterized version of EM. We provide both empirical and theoretical results indicating that parameterized EM allows faster convergence to the maximum likelihood parameters than does standard EM.
8,047,085
a3f7af74a88f797294ca8e626a85867d43b04b1b
(De)randomized Construction of Small Sample Spaces in NC
Koller and Megiddo introduced the paradigm of constructing compact distributions that satisfy a given set of constraints and showed how it can be used to efficiently derandomize certain types of algorithms. In this paper, we significantly extend their results in two ways. First, we show how their approach can be applied to deal with more general expectation constraints. More importantly, we provide the first parallel (NC) algorithm for constructing a compact distribution that satisfies the constraints up to a small relative error. This algorithm deals with constraints over any event that can be verified by finite automata, including all independence constraints as well as constraints over events relating to the parity or sum of a certain set of variables. Our construction relies on a new and independently interesting parallel algorithm for converting a solution to a linear system into an almost basic approximate solution to the same system. We use these techniques in the first NC derandomization of an algorithm for constructing large independent sets in d-uniform hypergraphs for arbitrary d. We also show how the linear programming perspective suggests new proof techniques which might be useful in general probabilistic analysis.
2,722,181
a8e771bf1ba3cb9adc75c68b3651cb0946e7ebdf
Knowledge Representation for an Uncertain World.
Any application where an intelligent agent interacts with the real world must deal with the problem of uncertainty. Bayesian belief networks have become dominant in addressing this issue. This is a framework based on principled probabilistic semantics, which achieves effective knowledge representation and inference capabilities by utilizing the locality structure in the domain: typically, only very few aspects of the situation directly affect each other. Despite their success, belief networks are inadequate as a knowledge representation language for large, complex domains: their attribute-based nature does not allow us to express general rules that hold in many different circumstances. This prevents knowledge from being shared among applications; the initial knowledge acquisition cost has to be paid for each new domain. It also inhibits the construction of large complex networks. We deal with this issue by presenting a rich knowledge-representation language from which belief networks can be constructed to suit specific circumstances, algorithms for learning the network parameters from data, and fast approximate inference algorithms designed to deal with the large networks that result. We show how these techniques can be applied in domains involving continuous variables, in situations where the world changes over time, and in the context of planning under uncertainty.
60,009,191
b4d885cf904f5d061e04c1bdf585bda3de1c956f
Effective Bayesian Inference for Stochastic Programs
In this paper, we propose a stochastic version of a general purpose functional programming language as a method of modeling stochastic processes. The language contains random choices, conditional statements, structured values, defined functions, and recursion. By imagining an experiment in which the program is "run" and the random choices made by sampling, we can interpret a program in this language as encoding a probability distribution over a (potentially infinite) set of objects. We provide an exact algorithm for computing conditional probabilities of the form Pr(P(x) | Q(x)) where x is chosen randomly from this distribution. This algorithm terminates precisely when sampling x and computing P(x) and Q(x) terminates in all possible stochastic executions (under lazy evaluation semantics, in which only values needed to compute the output of the program are evaluated). We demonstrate the applicability of the language and the efficiency of the inference algorithm by encoding both Bayesian networks and stochastic context-free grammars in our language, and showing that our algorithm derives efficient inference algorithms for both. Our language easily supports interesting and useful extensions to these formalisms (e.g., recursive Bayesian networks), to which our inference algorithm will automatically apply.
10,258,544
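For intuition only, a toy stochastic program and a naive rejection sampler for queries of the form Pr(P(x) | Q(x)). The paper itself gives an exact inference algorithm, not a sampler, and the coin-flip program below is invented.

import random

def program():
    # A stochastic "functional program": two biased coin flips, summed.
    a = 1 if random.random() < 0.7 else 0
    b = 1 if random.random() < 0.4 else 0
    return a + b

def conditional_prob(pred, cond, trials=100000):
    hits = total = 0
    for _ in range(trials):
        x = program()
        if cond(x):
            total += 1
            if pred(x):
                hits += 1
    return hits / total

# Estimate Pr(sum == 2 | sum >= 1); the exact value is 0.28 / 0.82.
print(conditional_prob(lambda x: x == 2, lambda x: x >= 1))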
da94000d2def6235ab60b88a6da31876bc6c5fc3
Learning Probabilities for Noisy First-Order Rules
First-order logic is the traditional basis for knowledge representation languages. However, its applicability to many real-world tasks is limited by its inability to represent uncertainty. Bayesian belief networks, on the other hand, are inadequate for complex KR tasks due to the limited expressivity of the underlying (propositional) language. The need to incorporate uncertainty into an expressive language has led to a resurgence of work on first-order probabilistic logic. This paper addresses one of the main objections to the incorporation of probabilities into the language: "Where do the numbers come from?" We present an approach that takes a knowledge base in an expressive rule-based first-order language, and learns the probabilistic parameters associated with those rules from data cases. Our approach, which is based on algorithms for learning in traditional Bayesian networks, can handle data cases where many of the relevant aspects of the situation are unobserved. It is also capable of utilizing a rich variety of data cases, including instances with varying causal structure, and even involving a varying number of individuals. These features allow the approach to be used for a wide range of tasks, such as learning genetic propagation models or learning first-order STRIPS planning operators with uncertain effects.
13,072,151
f2ef01101e3d8a1dba9a0b641b4ff5341b877899
Nonuniform Dynamic Discretization in Hybrid Networks
We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network aiming at minimizing the information loss induced by the discretization. We show that a nonuniform partition across all variables as opposed to uniform partition of each variable separately reduces the size of the data structures needed to represent a continuous function. We also provide a simple but efficient procedure for nonuniform partition. To represent a nonuniform discretization in the computer memory, we introduce a new data structure, which we call a Binary Split Partition (BSP) tree. We show that BSP trees can be an exponential factor smaller than the data structures in the standard uniform discretization in multiple dimensions and show how the BSP trees can be used in the standard join tree algorithm. We show that the accuracy of the inference process can be significantly improved by adjusting discretization with evidence. We construct an iterative anytime algorithm that gradually improves the quality of the discretization and the accuracy of the answer on a query. We provide empirical evidence that the algorithm converges.
7,697,546
0885dd83b42d69caf82b7267b5bca7ec49eaa4a0
First-Order Conditional Logic Revisited
Conditional logics play an important role in recent attempts to investigate default reasoning. This paper investigates first-order conditional logic. We show that, as for first-order probabilistic logic, it is important not to confound statistical conditionals over the domain (such as "most birds fly"), and subjective conditionals over possible worlds (such as "I believe that Tweety is unlikely to fly"). We then address the issue of ascribing semantics to first-order conditional logic. As in the propositional case, there are many possible semantics. To study the problem in a coherent way, we use plausibility structures. These provide us with a general framework in which many of the standard approaches can be embedded. We show that while these standard approaches are all the same at the propositional level, they are significantly different in the context of a first-order language. We show that plausibilities provide the most natural extension of conditional logic to the first-order case: we provide a sound and complete axiomatization that contains only the KLM properties and standard axioms of first-order modal logic. We show that most of the other approaches have additional properties, which result in an inappropriate treatment of an infinitary version of the lottery paradox.
6,466,678
0b974615bb275c6bfd281890270120ae682f85f7
Finding mixed strategies with small supports in extensive form games
The complexity of algorithms that compute strategies or operate on them typically depends on the representation length of the strategies involved. One measure for the size of a mixed strategy is the number of strategies in its support — the set of pure strategies to which it gives positive probability. This paper investigates the existence of "small" mixed strategies in extensive form games, and how such strategies can be used to create more efficient algorithms. The basic idea is that, in an extensive form game, a mixed strategy induces a small set of realization weights that completely describe its observable behavior. This fact can be used to show that for any mixed strategy μ, there exists a realization-equivalent mixed strategy μ′ whose size is at most the size of the game tree. For a player with imperfect recall, the problem of finding such a strategy μ′ (given the realization weights) is NP-hard. On the other hand, if μ is a behavior strategy, μ′ can be constructed from μ in time polynomial in the size of the game tree. In either case, we can use the fact that mixed strategies need never be too large for constructing efficient algorithms that search for equilibria. In particular, we construct the first exponential-time algorithm for finding all equilibria of an arbitrary two-person game in extensive form.
9,115,803
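A minimal sketch of the realization-weight idea: for a behavior strategy, the realization weight of a sequence of the player's own moves is just the product of the move probabilities along it, and these few numbers summarize the strategy's observable behavior. The tiny two-level tree is an invented example.

# Behavior probabilities at player 1's two decision points: first L/R, then l/r.
behavior = {"L": 0.3, "R": 0.7, "l": 0.6, "r": 0.4}

def realization_weight(sequence):
    # The realization weight of a sequence of the player's own moves is the
    # product of the behavior probabilities along that sequence.
    w = 1.0
    for action in sequence:
        w *= behavior[action]
    return w

for seq in [("L",), ("R",), ("L", "l"), ("L", "r")]:
    print(seq, realization_weight(seq))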
14ed68bccedb4941631693dac0299031f898d75e
Efficient Computation of Equilibria for Extensive Two-Person Games
The Nash equilibria of a two-person, non-zero-sum game are the solutions of a certain linear complementarity problem (LCP). In order to use this for solving a game in extensive form, the game must first be converted to a strategic description such as the normal form. The classical normal form, however, is often exponentially large in the size of the game tree. If the game has perfect recall, a linear-sized strategic description is the sequence form. For the resulting small LCP, we show that an equilibrium is found efficiently by Lemke's algorithm, a generalization of the Lemke–Howson method.
11,543,818
19c9f3ae3a4e85a10f0432fe2294151915bded36
Asymptotic Conditional Probabilities: The Non-Unary Case
Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences φ and θ, we consider the structures with domain {1, . . . , N} that satisfy θ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogon’kĭı [Lio69], if there is a non-unary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. We extend this result to show that asymptotic conditional probabilities do not always exist for any reasonable notion of limit. Liogon’kĭı also showed that the problem of deciding whether the limit exists is undecidable. We analyze the complexity of three problems with respect to this limit: deciding whether it is well defined, whether it exists, and whether it lies in some nontrivial interval. Matching upper and lower bounds are given for all three problems, showing them to be highly undecidable.
13,787,487
2524706673c07b98a88603ada25c7444502d8205
Asymptotic Conditional Probabilities: The Unary Case
Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences $\phi$ and $\theta$, we consider the structures with domain $\{1,\ldots, N\}$ that satisfy $\theta$, and compute the fraction of them in which $\phi$ is true. We then consider what happens to this fraction as $N$ gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogon'kii [Math. Notes Acad. USSR, 6 (1969), pp. 856--861] and by Grove, Halpern, and Koller [Res. Rep. RJ 9564, IBM Almaden Research Center, San Jose, CA, 1993], in the general case, asymptotic conditional probabilities do not always exist, and most questions relating to this issue are highly undecidable. These results, however, all depend on the assumption that $\theta$ can use a nonunary predicate symbol. Liogon'kii [Math. Notes Acad. USSR, 6 (1969), pp. 856--861] shows that if we condition on formulas $\theta$ involving unary predicate symbols only (but no equality or constant symbols), then the asymptotic conditional probability does exist and can be effectively computed. This is the case even if we place no corresponding restrictions on $\phi$. We extend this result here to the case where $\theta$ involves equality and constants. We show that the complexity of computing the limit depends on various factors, such as the depth of quantifier nesting, or whether the vocabulary is finite or infinite. We completely characterize the complexity of the problem in the different cases, and show related results for the associated approximation problem.
14,177,175
42c594c38cca3bed64ba07801d071b29b439c648
The role of AI in digital libraries
The World Wide Web's growing popularity is changing the nature of libraries. Digital libraries offer a huge range of multimedia information-everything from movies, speeches, images, and photos to sounds, text, and beyond. The amount of on-line material is exploding, and the infrastructure for locating and accessing material improves almost daily. With so much and such a wide variety of information available, the problem is changing from simply locating related information to locating the most relevant information efficiently and cost effectively. In building the next generation of digital libraries, artificial intelligence will play several important roles. First, the multimedia nature of digital libraries will require moving beyond simple keyword lookup of information to much more advanced document-processing capabilities in which the system analyzes the content through text analysis, image processing, and speech recognition. Second, the availability of such a huge amount of information will require advances in the infrastructure for organizing and accessing information. A promising approach to this problem is the development of information agents. These agents can provide a variety of services-such as searching, retrieving, filtering, and negotiating-that reduce the burden on the information user or provider. Researchers from several of the major digital library projects present their vision of AI's role in building digital libraries.
32,662,239
5bdd9a3317b43966f97b5c70c55c46fd19335049
Context-Specific Independence in Bayesian Networks
Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme--tree-structured CPTs--for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.
8,303,823
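A minimal sketch of a tree-structured CPT of the kind this abstract studies: a decision tree over parent values whose leaves hold distributions, so contexts that render other parents irrelevant are stored once. The alarm-style variables and numbers are invented.

def alarm_cpt(parents):
    # P(Alarm | Burglary, Earthquake, Vacation) as a decision tree: once
    # Burglary = 1, the other parents are irrelevant, so that whole context
    # is stored as a single leaf instead of many identical table rows.
    if parents["Burglary"] == 1:
        return {"on": 0.95, "off": 0.05}
    if parents["Earthquake"] == 1:
        return {"on": 0.30, "off": 0.70}
    if parents["Vacation"] == 1:
        return {"on": 0.02, "off": 0.98}
    return {"on": 0.01, "off": 0.99}

print(alarm_cpt({"Burglary": 0, "Earthquake": 1, "Vacation": 0}))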
5ed4e1dbe10c0ac9fa00b30d1882cae1249a5a6a
Toward Optimal Feature Selection
In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with a very large number of features.
1,455,429
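A minimal sketch of the information-theoretic building block involved: estimating the mutual information between each feature and the class from data. Note that the paper's actual criterion is conditional (drop a feature if it adds little beyond the remaining ones); the marginal score below is only the simplest ingredient, and the four-instance data set is invented.

import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) estimated from paired samples, in nats.
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Four labeled instances with three binary features; features 1 and 2
# carry no information about the label and would be candidates to drop.
data = [((1, 0, 1), 1), ((1, 1, 0), 1), ((0, 0, 1), 0), ((0, 1, 0), 0)]
labels = [y for _, y in data]
for j in range(3):
    column = [x[j] for x, _ in data]
    print("feature", j, "MI with label:", round(mutual_information(column, labels), 3))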
a0910e42b3bab72ad2ef73166aee0e7aae514eb6
Structured representations and intractability
null
5,153,984
b6f58567e134e11ee2fa8ae9eb088723c1b1bf03
From Statistical Knowledge Bases to Degrees of Belief
null
47,035,812
e597eb661a2401b38549b44691efda6c72e9cd1f
Irrelevance and Conditioning in First-Order Probabilistic Logic
First-order probabilistic logic is a powerful knowledge representation language. Unfortunately, deductive reasoning based on the standard semantics for this logic does not support certain desirable patterns of reasoning, such as indifference to irrelevant information or substitution of constants into universal rules. We show that both these patterns rely on a first-order version of probabilistic independence, and provide semantic conditions to capture them. The resulting insight enables us to understand the effect of conditioning on independence, and allows us to describe a procedure for determining when independencies are preserved under conditioning. We apply this procedure in the context of a sound and powerful inference algorithm for reasoning from statistical knowledge bases.
2,908,780
06c27c96b5201f6fc57c434856c1865135f4bbb5
Local Learning in Probabilistic Networks with Hidden Variables
Probabilistic networks, which provide compact descriptions of complex stochastic relationships among several random variables, are rapidly becoming the tool of choice for uncertain reasoning in artificial intelligence. We show that networks with fixed structure containing hidden variables can be learned automatically from data using a gradient-descent mechanism similar to that used in neural networks. We also extend the method to networks with intensionally represented distributions, including networks with continuous variables and dynamic probabilistic networks. Because probabilistic networks provide explicit representations of causal structure, human experts can easily contribute prior knowledge to the training process, thereby significantly improving the learning rate. Adaptive probabilistic networks (APNs) may soon compete directly with neural networks as models in computational neuroscience as well as in industrial and financial applications.
1,773,555
135d19fe9c3836d5ba5f6af7620e4d25f2fed710
Stochastic simulation algorithms for dynamic probabilistic networks
Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, "evidence reversal" (ER), restructures each time slice of the DPN so that the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called "survival of the fittest" sampling (SOF), "repopulates" the set of trials at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial. We compare the performance of each algorithm with likelihood weighting on the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the simulation.
421,074
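A minimal sketch of the "survival of the fittest" step in one dimension, with an invented Gaussian process and observation model: each trial is simulated forward, weighted by the likelihood of the evidence, and the population is resampled in proportion to those weights.

import math
import random

def step(s):
    return s + random.gauss(0.0, 1.0)              # toy state dynamics

def likelihood(obs, s):
    return math.exp(-0.5 * (obs - s) ** 2)         # Gaussian evidence model

def sof_step(trials, obs):
    trials = [step(s) for s in trials]             # simulate one time slice
    weights = [likelihood(obs, s) for s in trials]
    # Repopulate: trials reproduce in proportion to how well they explain
    # the evidence, pushing the population back towards reality.
    return random.choices(trials, weights, k=len(trials))

trials = [0.0] * 200
for obs in [0.5, 1.0, 1.8, 2.5]:
    trials = sof_step(trials, obs)
print(sum(trials) / len(trials))                   # rough posterior mean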
18a61c55bc0ef7d658a79bb348e5eb6c58d033ce
Representation Dependence in Probabilistic Inference
Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed this as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any representation-independent probabilistic inference procedure that ignores irrelevant information is essentially entailment, in a precise sense. Moreover, we show that representation independence is incompatible with even a weak default assumption of independence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation independence and other desiderata, and provide a construction of a family of inference procedures that provides such restricted representation independence, using relative entropy.
5,150,206
1c13044ef01f6c7f27073172f1179b4dcb041fb9
An integrated stereo-based approach to automatic vehicle guidance
Proposes a new approach for vision-based longitudinal and lateral vehicle control. The novel feature of this approach is the use of binocular vision. We integrate two modules consisting of a new, domain-specific, efficient binocular stereo algorithm, and a lane marker detection algorithm, and show that the integration results in improved performance for each of the modules. Longitudinal control is supported by detecting and measuring the distances to leading vehicles using binocular stereo. The knowledge of the camera geometry with respect to the locally planar road is used to map the images of the road plane in the two camera views into alignment. This allows us to separate image features into those lying in the road plane, e.g. lane markers, and those due to other objects, which are dynamically integrated into an obstacle map. Therefore, in contrast with the previous work, we can cope with the difficulties arising from occlusion of lane markers by other vehicles. The detection and measurement of the lane markers provides us with the positional parameters and the road curvature which are needed for lateral vehicle control. Moreover, this information is also used to update the camera geometry with respect to the road, therefore allowing us to cope with the problem of vibrations and road inclination to obtain consistent results from binocular stereo.
11,979,762
296f28120625ba0261847729e96b9771ba20493e
A game-theoretic classification of interactive complexity classes
Game-theoretic characterisations of complexity classes have often proved useful in understanding the power and limitations of these classes. One well-known example tells us that PSPACE can be characterized by two-person, perfect-information games in which the length of a played game is polynomial in the length of the description of the initial position [by Chandra et al., see Journal of the ACM, vol. 28, p. 114-33 (1981)]. In this paper, we investigate the connection between game theory and interactive computation. We formalize the notion of a polynomially definable game system for the language L, which, informally, consists of two arbitrarily powerful players P1 and P2 and a polynomial-time referee V with a common input w. Player P1 claims that w ∈ L, and player P2 claims that w ∉ L; the referee's job is to decide which of these two claims is true. In general, we wish to study the following question: What is the effect of varying the system's game-theoretic properties on the class of languages recognizable by polynomially definable game systems? There are many possible game-theoretic properties that we could investigate in this context. The focus of this paper is the question of what happens when one or both of the players P1 and P2 have imperfect information or imperfect recall. We use polynomially definable game systems to derive new characterizations of the complexity classes NEXP and coNEXP.
16,052,715
4d83720259e0d2318e0ab1717d8167fde158db7e
A Game-Theoretic Classification of Interactive Complexity Classes (Extended Abstract)
null
62,086,067
b7e0086124e5cb301861d5e424394f438e46d5b2
Constructing Flexible Dynamic Belief Networks from First-Order Probabilistic Knowledge Bases
This paper investigates the power of first-order probabilistic logic (FOPL) as a representation language for complex dynamic situations. We introduce a sublanguage of FOPL and use it to provide a first-order version of dynamic belief networks. We show that this language is expressive enough to enable reasoning over time and to allow procedural representations of conditional probability tables. In particular, we define decision tree representations of conditional probability tables that can be used to decrease the size of the created belief networks. We provide an inference algorithm for our sublanguage using the paradigm of knowledge-based model construction. Given a FOPL knowledge base and a particular situation, our algorithm constructs a propositional dynamic belief network, which can be solved using standard belief network inference algorithms. In contrast to common dynamic belief networks, the structure of our networks is more flexible and better adapted to the given situation. We demonstrate the expressive power of our language and the flexibility of the resulting belief networks using a simple knowledge base modeling the propagation of infectious diseases.
8,744,900
bd556e178933bf3b56ec0929e4ef86104167bfac
Generating and Solving Imperfect Information Games
Work on game playing in AI has typically ignored games of imperfect information such as poker. In this paper we present a framework for dealing with such games. We point out several important issues that arise only in the context of imperfect information games, particularly the insufficiency of a simple game tree model to represent the players' information state and the need for randomization in the players' optimal strategies. We describe Gala, an implemented system that provides the user with a very natural and expressive language for describing games. From a game description, Gala creates an augmented game tree with information sets which can be used by various algorithms in order to find optimal strategies for that game. In particular, Gala implements the first practical algorithm for finding optimal randomized strategies in two-player imperfect information competitive games [Koller et al., 1994]. The running time of this algorithm is polynomial in the size of the game tree, whereas previous algorithms were exponential. We present experimental results showing that this algorithm is also efficient in practice and can therefore form the basis for a game-playing system.
15,391,918
1cd2240aadc2334b07db10b126f369f1205ad628
Smooth epsilon-Insensitive Regression by Loss Symmetrization
null
12,925,619
2a8b3ed431f30b29875b6cba6ad3465c16488a3d
The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length
We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis the learning algorithm outputs, can be made small with high confidence in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. Here we present two applications of the algorithm. In the first one we apply the algorithm in order to construct a model of the English language, and use this model to correct corrupted text. In the second application we construct a simple stochastic model for E. coli DNA.
52,809,613
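A minimal sketch of prediction with variable memory length, under simplifying assumptions: counts are kept for all contexts up to a fixed depth, and prediction backs off to the longest observed suffix. The paper's algorithm additionally learns which suffixes are worth keeping; that selection step is omitted here, and the training string is invented.

from collections import Counter, defaultdict

def train(text, max_depth=3):
    # Record next-symbol counts for every context of length 0..max_depth.
    counts = defaultdict(Counter)
    for i in range(len(text)):
        for d in range(max_depth + 1):
            if i - d >= 0:
                counts[text[i - d:i]][text[i]] += 1
    return counts

def predict(counts, history, max_depth=3):
    # Back off to the longest suffix of the history that was ever observed.
    for d in range(min(max_depth, len(history)), -1, -1):
        context = history[len(history) - d:]
        if context in counts:
            dist = counts[context]
            total = sum(dist.values())
            return {sym: c / total for sym, c in dist.items()}
    return {}

counts = train("abracadabra")
print(predict(counts, "abr"))   # 'a' always followed "abr" in the training text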
333bda2c79984e34ba985828c086d55bda56cd73
Phoneme Alignment using Large Margin Techniques
We propose an alignment method which is based on recent advances in kernel machines and large margin classifiers for sequences [13, 12], which in turn build on the pioneering work of Vapnik and colleagues [15, 4]. The alignment function we devise is based on mapping the speech signal and its phoneme representation along with the target alignment into an abstract vector-space. Building on techniques used for learning SVMs, our alignment function distills to a classifier in this vector-space which is aimed at separating correct alignments from incorrect ones. We describe a simple iterative algorithm for learning the alignment function and discuss its formal properties. Experiments with the TIMIT corpus show that our method outperforms the best performing HMM-based approach [1].
15,942,021
3bda2eaf7a7d33b1cf924610d1daa2c7b9eff989
Spikernels: Predicting Arm Movements by Embedding Population Spike Rate Patterns in Inner-Product Spaces
Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this letter is the construction of biologically motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an abstract vector space in which we can perform various prediction tasks. We discuss in detail the derivation of Spikernels and describe an efficient algorithm for computing their value on any two sequences of neural population spike counts. We demonstrate the merits of our modeling approach by comparing the Spikernel to various standard kernels in the task of predicting hand movement velocities from cortical recordings. All of the kernels that we tested in our experiments outperform the standard scalar product used in linear regression, with the Spikernel consistently achieving the best performance.
15,575,074
5b92e4b3b7dbace954df4cf1bc2a82613683cc0f
Data-Driven Online to Batch Conversions
Online learning algorithms are typically fast, memory efficient, and simple to implement. However, many common learning problems fit more naturally in the batch learning setting. The power of online learning algorithms can be exploited in batch settings by using online-to-batch conversion techniques which build a new batch algorithm from an existing online algorithm. We first give a unified overview of three existing online-to-batch conversion techniques which do not use training data in the conversion process. We then build upon these data-independent conversions to derive and analyze data-driven conversions. Our conversions find hypotheses with a small risk by explicitly minimizing data-dependent generalization bounds. We experimentally demonstrate the usefulness of our approach and in particular show that the data-driven conversions consistently outperform the data-independent conversions.
14,806,101
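A minimal sketch of a conversion in this spirit: run the online learner once, keep each intermediate hypothesis, and return the one with the smallest held-out risk. The perceptron learner and the data are invented, and the paper's data-driven conversions minimize generalization bounds rather than raw validation error as done here.

def perceptron_hypotheses(stream):
    # Yield a snapshot of the weight vector after each online round.
    w = [0.0, 0.0]
    for x, y in stream:
        if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
        yield list(w)

def risk(w, data):
    # Empirical zero-one risk of hypothesis w on a data set.
    return sum(1 for x, y in data
               if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0) / len(data)

stream = [((1.0, 0.2), 1), ((-0.9, 0.1), -1), ((0.8, -0.3), 1), ((-1.1, -0.2), -1)]
holdout = [((0.9, 0.0), 1), ((-1.0, 0.3), -1)]
best = min(perceptron_hypotheses(stream), key=lambda w: risk(w, holdout))
print(best)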
649005af00f176c64467c680c972c8e38712b447
Online Ranking by Projecting
We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
597,466
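A minimal sketch of one well-known member of this family of threshold-based online rankers (a PRank-style update): a single weight vector plus ordered thresholds carve the score line into ranks, and both are nudged whenever the predicted rank is wrong. The data and rank scale below are invented.

def prank_round(w, b, x, y):
    # One online round; ranks are 1..len(b)+1 and b stays sorted.
    score = sum(wi * xi for wi, xi in zip(w, x))
    taus = []
    for r in range(1, len(b) + 1):
        y_r = 1 if y > r else -1                  # should the score exceed b[r-1]?
        taus.append(y_r if y_r * (score - b[r - 1]) <= 0 else 0)
    w = [wi + sum(taus) * xi for wi, xi in zip(w, x)]
    b = [br - t for br, t in zip(b, taus)]
    return w, b

def predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return next((r for r, br in enumerate(b, start=1) if score < br), len(b) + 1)

w, b = [0.0, 0.0], [0.0, 1.0]                     # three ranks, two thresholds
for x, y in [((1.0, 0.0), 3), ((0.1, 0.0), 1), ((0.5, 0.0), 2)] * 20:
    w, b = prank_round(w, b, x, y)
print(predict(w, b, (1.0, 0.0)))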
a3543c4aec8a015442102886ae31323a3448a674
Phoneme alignment based on discriminative learning
We propose a new paradigm for aligning a phoneme sequence of a speech utterance with its acoustical signal counterpart. In contrast to common HMM-based approaches, our method employs a discriminative learning procedure in which the learning phase is tightly coupled with the alignment task at hand. The alignment function we devise is based on mapping the input acoustic-symbolic representations of the speech utterance along with the target alignment into an abstract vector space. We suggest a specific mapping into the abstract vector-space which utilizes standard speech features (e.g. spectral distances) as well as confidence outputs of a framewise phoneme classifier. Building on techniques used for large margin methods for predicting whole sequences, our alignment function distills to a classifier in the abstract vector-space which separates correct alignments from incorrect ones. We describe a simple iterative algorithm for learning the alignment function and discuss its formal properties. Experiments with the TIMIT corpus show that our method outperforms the current state-of-the-art approaches.
10,065,410
a5fa0b1743ae0dc3a7f98a35f6ec2c4e46a8a469
Loss Bounds for Online Category Ranking
Category ranking is the task of ordering labels with respect to their relevance to an input instance. In this paper we describe and analyze several algorithms for online category ranking where the instances are revealed in a sequential manner. We describe additive and multiplicative updates which constitute the core of the learning algorithms. The updates are derived by casting a constrained optimization problem for each new instance. We derive loss bounds for the algorithms by using the properties of the dual solution while imposing additional constraints on the dual form. Finally, we outline and analyze the convergence of a general update that can be employed with any Bregman divergence.
14,821,599
ba2747e918737aa6ce605d48f09bd15d49cfa76b
A New Perspective on an Old Perceptron Algorithm
We present a generalization of the Perceptron algorithm. The new algorithm performs a Perceptron-style update whenever the margin of an example is smaller than a predefined value. We derive worst case mistake bounds for our algorithm. As a byproduct we obtain a new mistake bound for the Perceptron algorithm in the inseparable case. We describe a multiclass extension of the algorithm. This extension is used in an experimental evaluation in which we compare the proposed algorithm to the Perceptron algorithm.
10,315,921
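A minimal sketch of the update rule this abstract describes, on invented data: a Perceptron-style update is performed whenever the signed margin falls at or below a chosen threshold beta, not only on outright mistakes.

def margin_perceptron(stream, beta=0.5):
    # Update not only on mistakes but whenever the signed margin is at most beta.
    w = [0.0, 0.0]
    for x, y in stream:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin <= beta:
            w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

data = [((1.0, 0.3), 1), ((-0.8, 0.2), -1), ((0.9, -0.4), 1), ((-1.0, -0.1), -1)]
print(margin_perceptron(data * 5))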
df5153e3c98b1d1320ab0e4761234ee5dc3a839b
The Forgetron: A Kernel-Based Perceptron on a Fixed Budget
The Perceptron algorithm, despite its simplicity, often performs well on online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis, which may grow unboundedly. In this paper we present and analyze the Forgetron algorithm for kernel-based online learning on a fixed memory budget. To our knowledge, this is the first online learning algorithm which, on one hand, maintains a strict limit on the number of examples it stores while, on the other hand, entertains a relative mistake bound. In addition to the formal results, we also present experiments with real datasets which underscore the merits of our approach.
7,029,535
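A minimal sketch of kernel-based online learning on a fixed budget, in the spirit of (but simpler than) the Forgetron: on each mistake a new support vector is admitted, and once the budget is exceeded the oldest one is discarded. The true Forgetron also shrinks coefficients before removal to secure its mistake bound; that step, the RBF kernel, and the data are assumptions of this sketch.

import math
from collections import deque

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

class BudgetKernelPerceptron:
    def __init__(self, budget=5):
        self.budget = budget
        self.support = deque()          # stored (example, label) pairs

    def score(self, x):
        return sum(y * rbf(z, x) for z, y in self.support)

    def observe(self, x, y):
        if y * self.score(x) <= 0:      # mistake: admit a new support vector
            self.support.append((x, y))
            if len(self.support) > self.budget:
                self.support.popleft()  # budget full: forget the oldest example

clf = BudgetKernelPerceptron(budget=5)
for x, y in [((0.0,), -1), ((1.0,), 1), ((0.2,), -1), ((0.9,), 1)] * 3:
    clf.observe(x, y)
print(clf.score((0.95,)))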