diff --git "a/SubCora.json" "b/SubCora.json"
new file mode 100644
--- /dev/null
+++ "b/SubCora.json"
@@ -0,0 +1,15760 @@
+[
+ {
+ "node_id": 0,
+ "label": 2,
+ "text": "Title: The megaprior heuristic for discovering protein sequence patterns \nAbstract: Several computer algorithms for discovering patterns in groups of protein sequences are in use that are based on fitting the parameters of a statistical model to a group of related sequences. These include hidden Markov model (HMM) algorithms for multiple sequence alignment, and the MEME and Gibbs sampler algorithms for discovering motifs. These algorithms are sometimes prone to producing models that are incorrect because two or more patterns have been combined. The statistical model produced in this situation is a convex combination (weighted average) of two or more different models. This paper presents a solution to the problem of convex combinations in the form of a heuristic based on using extremely low variance Dirichlet mixture priors as part of the statistical model. This heuristic, which we call the megaprior heuristic, increases the strength (i.e., decreases the variance) of the prior in proportion to the size of the sequence dataset. This causes each column in the final model to strongly resemble the mean of a single component of the prior, regardless of the size of the dataset. We describe the cause of the convex combination problem, analyze it mathematically, motivate and describe the implementation of the megaprior heuristic, and show how it can effectively eliminate the problem of convex combinations in protein sequence pattern discovery. ",
+ "neighbors": [
+ 3,
+ 150,
+ 244,
+ 314
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1,
+ "label": 4,
+ "text": "Title: Submitted to NIPS96, Section: Applications. Preference: Oral presentation Reinforcement Learning for Dynamic Channel Allocation in\nAbstract: In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment. This problem is naturally formulated as a dynamic programming problem and we use a reinforcement learning (RL) method to find dynamic channel allocation policies that are better than previous heuristic solutions. The policies obtained perform well for a broad variety of call traffic patterns. We present results on a large cellular system In cellular communication systems, an important problem is to allocate the communication resource (bandwidth) so as to maximize the service provided to a set of mobile callers whose demand for service changes stochastically. A given geographical area is divided into mutually disjoint cells, and each cell serves the calls that are within its boundaries (see Figure 1a). The total system bandwidth is divided into channels, with each channel centered around a frequency. Each channel can be used simultaneously at different cells, provided these cells are sufficiently separated spatially, so that there is no interference between them. The minimum separation distance between simultaneous reuse of the same channel is called the channel reuse constraint . When a call requests service in a given cell either a free channel (one that does not violate the channel reuse constraint) may be assigned to the call, or else the call is blocked from the system; this will happen if no free channel can be found. Also, when a mobile caller crosses from one cell to another, the call is \"handed off\" to the cell of entry; that is, a new free channel is provided to the call at the new cell. If no such channel is available, the call must be dropped/disconnected from the system. One objective of a channel allocation policy is to allocate the available channels to calls so that the number of blocked calls is minimized. An additional objective is to minimize the number of calls that are dropped when they are handed off to a busy cell. These two objectives must be weighted appropriately to reflect their relative importance, since dropping existing calls is generally more undesirable than blocking new calls. with approximately 70 49 states.",
+ "neighbors": [
+ 232,
+ 269,
+ 318,
+ 327
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 2,
+ "label": 6,
+ "text": "Title: Online Learning versus O*ine Learning \nAbstract: We present an off-line variant of the mistake-bound model of learning. Just like in the well studied on-line model, a learner in the offline model has to learn an unknown concept from a sequence of elements of the instance space on which he makes \"guess and test\" trials. In both models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. We give a combinatorial characterization of the number of mistakes in the off-line model. We apply this characterization to solve several natural questions that arise for the new model. First, we compare the mistake bounds of an off-line learner to those of a learner learning the same concept classes in the on-line scenario. We show that the number of mistakes in the on-line learning is at most a log n factor more than the off-line learning, where n is the length of the sequence. In addition, we show that if there is an off-line algorithm that does not make more than a constant number of mistakes for each sequence then there is an online algorithm that also does not make more than a constant number of mistakes. The second issue we address is the effect of the ordering of the elements on the number of mistakes of an off-line learner. It turns out that there are sequences on which an off-line learner can guarantee at most one mistake, yet a permutation of the same sequence forces him to err on many elements. We prove, however, that the gap, between the off-line mistake bounds on permutations of the same sequence of n-many elements, cannot be larger than a multiplicative factor of log n, and we present examples that obtain such a gap. ",
+ "neighbors": [
+ 179,
+ 255,
+ 275,
+ 442
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 3,
+ "label": 2,
+ "text": "Title: Hidden Markov Models in Computational Biology: Applications to Protein Modeling UCSC-CRL-93-32 Keywords: Hidden Markov Models,\nAbstract: Hidden Markov Models (HMMs) are applied to the problems of statistical modeling, database searching and multiple sequence alignment of protein families and protein domains. These methods are demonstrated on the globin family, the protein kinase catalytic domain, and the EF-hand calcium binding motif. In each case the parameters of an HMM are estimated from a training set of unaligned sequences. After the HMM is built, it is used to obtain a multiple alignment of all the training sequences. It is also used to search the SWISS-PROT 22 database for other sequences that are members of the given protein family, or contain the given domain. The HMM produces multiple alignments of good quality that agree closely with the alignments produced by programs that incorporate three-dimensional structural information. When employed in discrimination tests (by examining how closely the sequences in a database fit the globin, kinase and EF-hand HMMs), the HMM is able to distinguish members of these families from non-members with a high degree of accuracy. Both the HMM and PRO-FILESEARCH (a technique used to search for relationships between a protein sequence and multiply aligned sequences) perform better in these tests than PROSITE (a dictionary of sites and patterns in proteins). The HMM appears to have a slight advantage ",
+ "neighbors": [
+ 0,
+ 16,
+ 131,
+ 137,
+ 150,
+ 156,
+ 221,
+ 244,
+ 248,
+ 314,
+ 358,
+ 411,
+ 424,
+ 431
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 4,
+ "label": 2,
+ "text": "Title: Back Propagation is Sensitive to Initial Conditions \nAbstract: This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration. ",
+ "neighbors": [
+ 70,
+ 82,
+ 132,
+ 146,
+ 188,
+ 226,
+ 308,
+ 407
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 5,
+ "label": 4,
+ "text": "Title: Exploration in Active Learning \nAbstract: This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration. ",
+ "neighbors": [
+ 264,
+ 318,
+ 328
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 6,
+ "label": 2,
+ "text": "Title: Topography And Ocular Dominance: A Model Exploring Positive Correlations \nAbstract: The map from eye to brain in vertebrates is topographic, i.e. neighbouring points in the eye map to neighbouring points in the brain. In addition, when two eyes innervate the same target structure, the two sets of fibres segregate to form ocular dominance stripes. Experimental evidence from the frog and goldfish suggests that these two phenomena may be subserved by the same mechanisms. We present a computational model that addresses the formation of both topography and ocular dominance. The model is based on a form of competitive learning with subtractive enforcement of a weight normalization rule. Inputs to the model are distributed patterns of activity presented simultaneously in both eyes. An important aspect of this model is that ocular dominance segregation can occur when the two eyes are positively correlated, whereas previous models have tended to assume zero or negative correlations between the eyes. This allows investigation of the dependence of the pattern of stripes on the degree of correlation between the eyes: we find that increasing correlation leads to narrower stripes. Experiments are suggested to test this prediction.",
+ "neighbors": [
+ 69,
+ 240,
+ 430,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 7,
+ "label": 2,
+ "text": "Title: Validation of Average Error Rate Over Classifiers \nAbstract: We examine methods to estimate the average and variance of test error rates over a set of classifiers. We begin with the process of drawing a classifier at random for each example. Given validation data, the average test error rate can be estimated as if validating a single classifier. Given the test example inputs, the variance can be computed exactly. Next, we consider the process of drawing a classifier at random and using it on all examples. Once again, the expected test error rate can be validated as if validating a single classifier. However, the variance must be estimated by validating all classifers, which yields loose or uncertain bounds. ",
+ "neighbors": [
+ 40
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 8,
+ "label": 6,
+ "text": "Title: 25 Learning in Hybrid Noise Environments Using Statistical Queries \nAbstract: We consider formal models of learning from noisy data. Specifically, we focus on learning in the probability approximately correct model as defined by Valiant. Two of the most widely studied models of noise in this setting have been classification noise and malicious errors. However, a more realistic model combining the two types of noise has not been formalized. We define a learning environment based on a natural combination of these two noise models. We first show that hypothesis testing is possible in this model. We next describe a simple technique for learning in this model, and then describe a more powerful technique based on statistical query learning. We show that the noise tolerance of this improved technique is roughly optimal with respect to the desired learning accuracy and that it provides a smooth tradeoff between the tolerable amounts of the two types of noise. Finally, we show that statistical query simulation yields learning algorithms for other combinations of noise models, thus demonstrating that statistical query specification truly An important goal of research in machine learning is to determine which tasks can be automated, and for those which can, to determine their information and computation requirements. One way to answer these questions is through the development and investigation of formal models of machine learning which capture the task of learning under plausible assumptions. In this work, we consider the formal model of learning from examples called \"probably approximately correct\" (PAC) learning as defined by Valiant [Val84]. In this setting, a learner attempts to approximate an unknown target concept simply by viewing positive and negative examples of the concept. An adversary chooses, from some specified function class, a hidden f0; 1g-valued target function defined over some specified domain of examples and chooses a probability distribution over this domain. The goal of the learner is to output in both polynomial time and with high probability, an hypothesis which is \"close\" to the target function with respect to the distribution of examples. The learner gains information about the target function and distribution by interacting with an example oracle. At each request by the learner, this oracle draws an example randomly according to the hidden distribution, labels it according to the hidden target function, and returns the labelled example to the learner. A class of functions F is said to be PAC learnable if captures the generic fault tolerance of a learning algorithm.",
+ "neighbors": [
+ 155
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 9,
+ "label": 4,
+ "text": "Title: Decision Tree Function Approximation in Reinforcement Learning \nAbstract: We present a decision tree based approach to function approximation in reinforcement learning. We compare our approach with table lookup and a neural network function approximator on three problems: the well known mountain car and pole balance problems as well as a simulated automobile race car. We find that the decision tree can provide better learning performance than the neural network function approximation and can solve large problems that are infeasible using table lookup.",
+ "neighbors": [
+ 169,
+ 245,
+ 329,
+ 776
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 10,
+ "label": 1,
+ "text": "Title: Discovering Complex Othello Strategies Through Evolutionary Neural Networks \nAbstract: An approach to develop new game playing strategies based on artificial evolution of neural networks is presented. Evolution was directed to discover strategies in Othello against a random-moving opponent and later against an ff-fi search program. The networks discovered first a standard positional strategy, and subsequently a mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolutionary neural networks can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment. ",
+ "neighbors": [
+ 70,
+ 91,
+ 108,
+ 981,
+ 1176
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 11,
+ "label": 3,
+ "text": "Title: Applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses \nAbstract: Technical Report No. 670 December, 1997 ",
+ "neighbors": [
+ 21,
+ 441
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 12,
+ "label": 2,
+ "text": "Title: Neural Network Applicability: Classifying the Problem Space \nAbstract: The tremendous current effort to propose neurally inspired methods of computation forces closer scrutiny of real world application potential of these models. This paper categorizes applications into classes and particularly discusses features of applications which make them efficiently amenable to neural network methods. Computational machines do deterministic mappings of inputs to outputs and many computational mechanisms have been proposed for problem solutions. Neural network features include parallel execution, adaptive learning, generalization, and fault tolerance. Often, much effort is given to a model and applications which can already be implemented in a much more efficient way with an alternate technology. Neural networks are potentially powerful devices for many classes of applications, but not all. However, it is proposed that the class of applications for which neural networks are efficient is both large and commonly occurring in nature. Comparison of supervised, unsupervised, and generalizing systems is also included. ",
+ "neighbors": [
+ 432,
+ 641,
+ 1318
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 13,
+ "label": 3,
+ "text": "Title: Formal Rules for Selecting Prior Distributions: A Review and Annotated Bibliography \nAbstract: Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet, in practice, most Bayesian analyses are performed with so-called \"noninfor-mative\" priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors, and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his point of view about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly; when sample sizes are small (relative to the number of parameters being estimated) it is dangerous to put faith in any \"default\" solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated bibliography. fl Robert E. Kass is Professor and Larry Wasserman is Associate Professor, Department of Statistics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-2717. The work of both authors was supported by NSF grant DMS-9005858 and NIH grant R01-CA54852-01. The authors thank Nick Polson for helping with a few annotations, and Jim Berger, Teddy Seidenfeld and Arnold Zellner for useful comments and discussion. ",
+ "neighbors": [
+ 47
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 14,
+ "label": 5,
+ "text": "Title: Stochastically Guided Disjunctive Version Space Learning \nAbstract: This paper presents an incremental concept learning approach to identiflcation of concepts with high overall accuracy. The main idea is to address the concept overlap as a central problem when learning multiple descriptions. Many traditional inductive algorithms, as those from the disjunctive version space family considered here, face this problem. The approach focuses on combinations of confldent, possibly overlapping, concepts with an original stochastic complexity formula. The focusing is e-cient because it is organized as a simulated annealing-based beam search. The experiments show that the approach is especially suitable for developing incremental learning algorithms with the following advantages: flrst, it generates highly accurate concepts; second, it overcomes to a certain degree the sensitivity to the order of examples; and third, it handles noisy examples. ",
+ "neighbors": [
+ 219,
+ 239,
+ 241
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 15,
+ "label": 0,
+ "text": "Title: Towards More Creative Case-Based Design Systems \nAbstract: Case-based reasoning (CBR) has a great deal to offer in supporting creative design, particularly processes that rely heavily on previous design experience, such as framing the problem and evaluating design alternatives. However, most existing CBR systems are not living up to their potential. They tend to adapt and reuse old solutions in routine ways, producing robust but uninspired results. Little research effort has been directed towards the kinds of situation assessment, evaluation, and assimilation processes that facilitate the exploration of ideas and the elaboration and redefinition of problems that are crucial to creative design. Also, their typically rigid control structures do not facilitate the kinds of strategic control and opportunism inherent in creative reasoning. In this paper, we describe the types of behavior we would like case-based design systems to support, based on a study of designers working on a mechanical engineering problem. We show how the standard CBR framework should be extended and we describe an architecture we are developing to experiment with these ideas. 1 ",
+ "neighbors": [
+ 130,
+ 395,
+ 649
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 16,
+ "label": 2,
+ "text": "Title: GIBBS-MARKOV MODELS \nAbstract: In this paper we present a framework for building probabilistic automata parameterized by context-dependent probabilities. Gibbs distributions are used to model state transitions and output generation, and parameter estimation is carried out using an EM algorithm where the M-step uses a generalized iterative scaling procedure. We discuss relations with certain classes of stochastic feedforward neural networks, a geometric interpretation for parameter estimation, and a simple example of a statistical language model constructed using this methodology. ",
+ "neighbors": [
+ 3,
+ 143,
+ 633
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 17,
+ "label": 2,
+ "text": "Title: Learning Generative Models with the Up-Propagation Algorithm \nAbstract: Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in In his doctrine of unconscious inference, Helmholtz argued that perceptions are formed by the interaction of bottom-up sensory data with top-down expectations. According to one interpretation of this doctrine, perception is a procedure of sequential hypothesis testing. We propose a new algorithm, called up-propagation, that realizes this interpretation in layered neural networks. It uses top-down connections to generate hypotheses, and bottom-up connections to revise them. It is important to understand the difference between up-propagation and its ancestor, the backpropagation algorithm[1]. Backpropagation is a learning algorithm for recognition models. As shown in Figure 1a, bottom-up connections recognize patterns, while top-down connections propagate an error signal that is used to learn the recognition model. In contrast, up-propagation is an algorithm for inverting and learning generative models, as shown in Figure 1b. Top-down connections generate patterns from a set of hidden variables. Sensory input is processed by inverting the generative model, recovering hidden variables that could have generated the sensory data. This operation is called either pattern recognition or pattern analysis, depending on the meaning of the hidden variables. Inversion of the generative model is done iteratively, through a negative feedback loop driven by an error signal from the bottom-up connections. The error signal is also used for learning the connections experiments on images of handwritten digits.",
+ "neighbors": [
+ 143,
+ 889,
+ 944
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 18,
+ "label": 4,
+ "text": "Title: Using a Case Base of Surfaces to Speed-Up Reinforcement Learning \nAbstract: This paper demonstrates the exploitation of certain vision processing techniques to index into a case base of surfaces. The surfaces are the result of reinforcement learning and represent the optimum choice of actions to achieve some goal from anywhere in the state space. This paper shows how strong features that occur in the interaction of the system with its environment can be detected early in the learning process. Such features allow the system to identify when an identical, or very similar, task has been solved previously and to retrieve the relevant surface. This results in an orders of magnitude increase in learning rate. ",
+ "neighbors": [
+ 37,
+ 322,
+ 327,
+ 328
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 19,
+ "label": 2,
+ "text": "Title: Generative Models for Discovering Sparse Distributed Representations \nAbstract: We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demon strate that the network learns to extract sparse, distributed, hierarchical representations.",
+ "neighbors": [
+ 149,
+ 430,
+ 608,
+ 889,
+ 944,
+ 1044,
+ 1061,
+ 1102,
+ 1234
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 20,
+ "label": 1,
+ "text": "Title: HOW TO EVOLVE AUTONOMOUS ROBOTS: DIFFERENT APPROACHES IN EVOLUTIONARY ROBOTICS \nAbstract: In most applications of neuro-evolution, each individual in the population represents a complete neural network. Recent work on the SANE system, however, has demonstrated that evolving individual neurons often produces a more efficient genetic search. This paper demonstrates that while SANE can solve easy tasks very quickly, it often stalls in larger problems. A hierarchical approach to neuro-evolution is presented that overcomes SANE's difficulties by integrating both a neuron-level exploratory search and a network-level exploitive search. In a robot arm manipulation task, the hierarchical approach outperforms both a neuron-based search and a network-based search. ",
+ "neighbors": [
+ 123,
+ 325,
+ 940
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 21,
+ "label": 3,
+ "text": "Title: Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review \nAbstract: A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution. Mary Kathryn Cowles is Assistant Professor of Biostatistics, Harvard School of Public Health, Boston, MA 02115. Bradley P. Carlin is Associate Professor, Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455. Much of the work was done while the first author was a graduate student in the Divison of Biostatistics at the University of Minnesota and then Assistant Professor, Biostatistics Section, Department of Preventive and Societal Medicine, University of Nebraska Medical Center, Omaha, NE 68198. The work of both authors was supported in part by National Institute of Allergy and Infectious Diseases FIRST Award 1-R29-AI33466. The authors thank the developers of the diagnostics studied here for sharing their insights, experiences, and software, and Drs. Thomas Louis and Luke Tierney for helpful discussions and suggestions which greatly improved the manuscript. ",
+ "neighbors": [
+ 11,
+ 202,
+ 418,
+ 517,
+ 518,
+ 947,
+ 949,
+ 957,
+ 1066,
+ 1257
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 22,
+ "label": 1,
+ "text": "Title: Evolutionary Module Acquisition \nAbstract: A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution. Mary Kathryn Cowles is Assistant Professor of Biostatistics, Harvard School of Public Health, Boston, MA 02115. Bradley P. Carlin is Associate Professor, Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455. Much of the work was done while the first author was a graduate student in the Divison of Biostatistics at the University of Minnesota and then Assistant Professor, Biostatistics Section, Department of Preventive and Societal Medicine, University of Nebraska Medical Center, Omaha, NE 68198. The work of both authors was supported in part by National Institute of Allergy and Infectious Diseases FIRST Award 1-R29-AI33466. The authors thank the developers of the diagnostics studied here for sharing their insights, experiences, and software, and Drs. Thomas Louis and Luke Tierney for helpful discussions and suggestions which greatly improved the manuscript. ",
+ "neighbors": [
+ 91,
+ 106,
+ 107,
+ 462
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 23,
+ "label": 0,
+ "text": "Title: Competitive Anti-Hebbian Learning of Invariants \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
+ "neighbors": [
+ 50,
+ 1248
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 24,
+ "label": 2,
+ "text": "Title: The Pandemonium System of Reflective Agents \nAbstract: In IEEE Transactions on Neural Networks, 7(1):97-106, 1996 Also available as GMD report #794 ",
+ "neighbors": [
+ 135,
+ 174,
+ 503
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 25,
+ "label": 3,
+ "text": "Title: Sampling from Multimodal Distributions Using Tempered Transitions \nAbstract: Technical Report No. 9421, Department of Statistics, University of Toronto Abstract. I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \"simulated tempering\", the \"tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \"deceptive\". ",
+ "neighbors": [
+ 418,
+ 977,
+ 1257
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 26,
+ "label": 0,
+ "text": "Title: Abstract \nAbstract: Metacognition addresses the issues of knowledge about cognition and regulating cognition. We argue that the regulation process should be improved with growing experience. Therefore mental models are needed which facilitate the re-use of previous regulation processes. We will satisfy this requirement by describing a case-based approach to Introspection Planning which utilises previous experience obtained during reasoning at the meta-level and at the object level. The introspection plans used in this approach support various metacognitive tasks which are identified by the generation of self-questions. As an example of introspection planning, the metacognitive behaviour of our system, IULIAN, is described. ",
+ "neighbors": [
+ 338,
+ 340,
+ 376
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 27,
+ "label": 1,
+ "text": "Title: DISTRIBUTED GENETIC ALGORITHMS FOR PARTITIONING UNIFORM GRIDS \nAbstract: The fault hierarchy representation is widely used in expert systems for the diagnosis of complex mechanical devices. On the assumption that an appropriate bias for a knowledge representation language is also an appropriate bias for learning in this domain, we have developed a theory revision method that operates directly on a fault hierarchy. This task presents several challenges: A typical training instance is missing most feature values, and the pattern of missing features is significant, rather than merely an effect of noise. Moreover, the accuracy of a candidate theory is measured by considering both the sequence of tests required to arrive at a diagnosis and its agreement with the diagnostic endpoints provided by an expert. This paper first describes the algorithm for theory revision of fault hierarchies that was designed to address these challenges, then discusses its application in knowledge base maintenance and reports on experiments that use to revise a fielded diagnostic system. ",
+ "neighbors": [
+ 138,
+ 466
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 28,
+ "label": 1,
+ "text": "Title: A Comparison of Selection Schemes used in Genetic Algorithms \nAbstract: TIK-Report Nr. 11, December 1995 Version 2 (2. Edition) ",
+ "neighbors": [
+ 91,
+ 490,
+ 978,
+ 999
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 29,
+ "label": 6,
+ "text": "Title: Self bounding learning algorithms \nAbstract: Most of the work which attempts to give bounds on the generalization error of the hypothesis generated by a learning algorithm is based on methods from the theory of uniform convergence. These bounds are a-priori bounds that hold for any distribution of examples and are calculated before any data is observed. In this paper we propose a different approach for bounding the generalization error after the data has been observed. A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis. We first explore the idea in the statistical query learning framework of Kearns [10]. After that we give an explicit self bounding algorithm for learning algorithms that are based on local search.",
+ "neighbors": [
+ 452,
+ 556,
+ 587
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 30,
+ "label": 4,
+ "text": "Title: Markov Decision Processes in Large State Spaces \nAbstract: In this paper we propose a new framework for studying Markov decision processes (MDPs), based on ideas from statistical mechanics. The goal of learning in MDPs is to find a policy that yields the maximum expected return over time. In choosing policies, agents must therefore weigh the prospects of short-term versus long-term gains. We study a simple MDP in which the agent must constantly decide between exploratory jumps and local reward mining in state space. The number of policies to choose from grows exponentially with the size of the state space, N . We view the expected returns as defining an energy landscape over policy space. Methods from statistical mechanics are used to analyze this landscape in the thermodynamic limit N ! 1. We calculate the overall distribution of expected returns, as well as the distribution of returns for policies at a fixed Hamming distance from the optimal one. We briefly discuss the problem of learning optimal policies from empirical estimates of the expected return. As a first step, we relate our findings for the entropy to the limit of high-temperature learning. Numerical simulations support the theoretical results. ",
+ "neighbors": [
+ 178,
+ 318,
+ 327,
+ 556,
+ 811
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 31,
+ "label": 2,
+ "text": "Title: Neural Networks with Quadratic VC Dimension \nAbstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed. ",
+ "neighbors": [
+ 307,
+ 565,
+ 1025
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 32,
+ "label": 2,
+ "text": "Title: SELF-ADAPTIVE NEURAL NETWORKS FOR BLIND SEPARATION OF SOURCES \nAbstract: Novel on-line learning algorithms with self adaptive learning rates (parameters) for blind separation of signals are proposed. The main motivation for development of new learning rules is to improve convergence speed and to reduce cross-talking, especially for non-stationary signals. Furthermore, we have discovered that under some conditions the proposed neural network models with associated learning algorithms exhibit a random switch of attention, i.e. they have ability of chaotic or random switching or cross-over of output signals in such way that a specified separated signal may appear at various outputs at different time windows. Validity, performance and dynamic properties of the proposed learning algorithms are investigated by computer simulation experiments. ",
+ "neighbors": [
+ 331,
+ 335,
+ 487,
+ 505,
+ 506,
+ 845
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 33,
+ "label": 4,
+ "text": "Title: The Efficient Learning of Multiple Task Sequences \nAbstract: I present a modular network architecture and a learning algorithm based on incremental dynamic programming that allows a single learning agent to learn to solve multiple Markovian decision tasks (MDTs) with significant transfer of learning across the tasks. I consider a class of MDTs, called composite tasks, formed by temporally concatenating a number of simpler, elemental MDTs. The architecture is trained on a set of composite and elemental MDTs. The temporal structure of a composite task is assumed to be unknown and the architecture learns to produce a temporal decomposition. It is shown that under certain conditions the solution of a composite MDT can be constructed by computationally inexpensive modifications of the solutions of its constituent elemental MDTs.",
+ "neighbors": [
+ 145,
+ 318,
+ 324,
+ 327,
+ 400
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 34,
+ "label": 3,
+ "text": "Title: Context-Specific Independence in Bayesian Networks \nAbstract: Bayesiannetworks provide a languagefor qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation schemetree-structured CPTs for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.",
+ "neighbors": [
+ 189,
+ 192,
+ 238,
+ 546,
+ 1246
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 35,
+ "label": 0,
+ "text": "Title: Integrating Creativity and Reading: A Functional Approach \nAbstract: Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories. ",
+ "neighbors": [
+ 167,
+ 278,
+ 340
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 36,
+ "label": 1,
+ "text": "Title: Integrating Creativity and Reading: A Functional Approach \nAbstract: dvitps ERROR: reno98b.dvi @ puccini.rutgers.edu Certain fonts that you requested in your dvi file could not be found on the system. In order to print your document, other fonts that are installed were substituted for these missing fonts. Below is a list of the substitutions that were made. /usr/local/lib/fonts/gf/cmbx12.518pk substituted for cmbx12.519pk ",
+ "neighbors": [
+ 428,
+ 429
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 37,
+ "label": 0,
+ "text": "Title: (1994); Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. Case-Based Reasoning: Foundational Issues, Methodological\nAbstract: Case-based reasoning is a recent approach to problem solving and learning that has got a lot of attention over the last few years. Originating in the US, the basic idea and underlying theories have spread to other continents, and we are now within a period of highly active research in case-based reasoning in Europe, as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to some systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions will refer. The framework is influenced by recent methodologies for knowledge level descriptions of intelligent systems. The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in the light of a few example systems that represent different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture. ",
+ "neighbors": [
+ 18,
+ 102,
+ 166,
+ 1116,
+ 1132,
+ 1193,
+ 1219
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 38,
+ "label": 2,
+ "text": "Title: DISCOVERING NEURAL NETS WITH LOW KOLMOGOROV COMPLEXITY AND HIGH GENERALIZATION CAPABILITY Neural Networks 10(5):857-873, 1997 \nAbstract: Many neural net learning algorithms aim at finding \"simple\" nets to explain training data. The expectation is: the \"simpler\" the networks, the better the generalization on test data (! Occam's razor). Previous implementations, however, use measures for \"simplicity\" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the \"Bayesian\" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding \"algorithmically simple\" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. For a given problem, solution candidates are computed by efficient \"self-sizing\" programs that influence their own runtime and storage size. The probabilistic search algorithm finds the \"good\" programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering \"algorithmically simple\" neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least with certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural net algorithms. Much remains do be done, however, to make large scale applications and \"incremental learning\" feasible.",
+ "neighbors": [
+ 561,
+ 997,
+ 1074
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 39,
+ "label": 6,
+ "text": "Title: Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods \nAbstract: One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition. ",
+ "neighbors": [
+ 147,
+ 540,
+ 1067
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 40,
+ "label": 3,
+ "text": "Title: Hierarchical Mixtures of Experts and the EM Algorithm \nAbstract: We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
+ "neighbors": [
+ 7,
+ 84,
+ 109,
+ 145,
+ 154,
+ 180,
+ 363,
+ 396,
+ 457,
+ 503,
+ 510,
+ 526,
+ 559,
+ 580,
+ 585,
+ 625,
+ 684,
+ 1041,
+ 1118,
+ 1234,
+ 1244,
+ 1284
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 41,
+ "label": 0,
+ "text": "Title: A Memory Model for Case Retrieval by Activation Passing \nAbstract: We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
+ "neighbors": [
+ 166,
+ 637,
+ 761,
+ 1004,
+ 1005,
+ 1116,
+ 1196
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 42,
+ "label": 3,
+ "text": "Title: A VIEW OF THE EM ALGORITHM THAT JUSTIFIES INCREMENTAL, SPARSE, AND OTHER VARIANTS \nAbstract: The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. ",
+ "neighbors": [
+ 71,
+ 100,
+ 143,
+ 504,
+ 559,
+ 1045,
+ 1203,
+ 1234,
+ 1288
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 43,
+ "label": 6,
+ "text": "Title: Probabilistic Networks: New Models and New Methods \nAbstract: In this paper I describe the implementation of a probabilistic regression model in BUGS. BUGS is a program that carries out Bayesian inference on statistical problems using a simulation technique known as Gibbs sampling. It is possible to implement surprisingly complex regression models in this environment. I demonstrate the simultaneous inference of an interpolant and an input-dependent noise level. ",
+ "neighbors": [
+ 86,
+ 122,
+ 267,
+ 323,
+ 444,
+ 1344
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 44,
+ "label": 6,
+ "text": "Title: A hierarchical ensemble of decision trees applied to classifying data from a psychological experiment \nAbstract: Classifying by hand complex data coming from psychology experiments can be a long and difficult task, because of the quantity of data to classify and the amount of training it may require. One way to alleviate this problem is to use machine learning techniques. We built a classifier based on decision trees that reproduces the classifying process used by two humans on a sample of data and that learns how to classify unseen data. The automatic classifier proved to be more accurate, more constant and much faster than classification by hand. ",
+ "neighbors": [
+ 245
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 45,
+ "label": 4,
+ "text": "Title: A Reinforcement Learning Approach to Job-shop Scheduling \nAbstract: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The temporal difference algorithm T D() is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step looka-head search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD sched-uler performs better than the best known existing algorithm for this task|Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.",
+ "neighbors": [
+ 136,
+ 170,
+ 177,
+ 232,
+ 315,
+ 327,
+ 776,
+ 800,
+ 865
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 46,
+ "label": 2,
+ "text": "Title: A Neural Network Pole Balancer that Learns and Operates on a Real Robot in Real Time \nAbstract: A neural network approach to the classic inverted pendulum task is presented. This task is the task of keeping a rigid pole, hinged to a cart and free to fall in a plane, in a roughly vertical orientation by moving the cart horizontally in the plane while keeping the cart within some maximum distance of its starting position. This task constitutes a difficult control problem if the parameters of the cart-pole system are not known precisely or are variable. It also forms the basis of an even more complex control-learning problem if the controller must learn the proper actions for successfully balancing the pole given only the current state of the system and a failure signal when the pole angle from the vertical becomes too great or the cart exceeds one of the boundaries placed on its position. The approach presented is demonstrated to be effective for the real-time control of a small, self-contained mini-robot, specially outfitted for the task. Origins and details of the learning scheme, specifics of the mini-robot hardware, and results of actual learning trials are presented. ",
+ "neighbors": [
+ 169,
+ 432
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 47,
+ "label": 3,
+ "text": "Title: Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models \nAbstract: Technical Report no. 255 Department of Statistics, University of Washington August 1993; Revised March 1994 ",
+ "neighbors": [
+ 13,
+ 85,
+ 199,
+ 570,
+ 698,
+ 757
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 48,
+ "label": 4,
+ "text": "Title: Q-Learning with Hidden-Unit Restarting \nAbstract: Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b) is modified for a reinforcement-learning paradigm and to \"restart\" existing hidden units rather than adding new units. After restarting, units continue to learn via back-propagation. The resulting restart algorithm is tested in a Q-learning network that learns to solve an inverted pendulum problem. Solutions are found faster on average with the restart algorithm than without it.",
+ "neighbors": [
+ 169,
+ 263,
+ 318,
+ 327
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 49,
+ "label": 5,
+ "text": "Title: A Hybrid Nearest-Neighbor and Nearest-Hyperrectangle Algorithm \nAbstract: We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results. ",
+ "neighbors": [
+ 220,
+ 417,
+ 1078
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 50,
+ "label": 6,
+ "text": "Title: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation \nAbstract: Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems. ",
+ "neighbors": [
+ 23,
+ 66,
+ 118,
+ 125,
+ 144,
+ 215,
+ 343
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 51,
+ "label": 2,
+ "text": "Title: IEEE Learning the Semantic Similarity of Reusable Software Components \nAbstract: Properly structured software libraries are crucial for the success of software reuse. Specifically, the structure of the software library ought to reect the functional similarity of the stored software components in order to facilitate the retrieval process. We propose the application of artificial neural network technology to achieve such a structured library. In more detail, we utilize an artificial neural network adhering to the unsupervised learning paradigm. The distinctive feature of this very model is to make the semantic relationship between the stored software components geographically explicit. Thus, the actual user of the software library gets a notion of the semantic relationship between the components in terms of their geographical closeness. ",
+ "neighbors": [
+ 430,
+ 432
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 52,
+ "label": 4,
+ "text": "Title: Learning Analytically and Inductively \nAbstract: Learning is a fundamental component of intelligence, and a key consideration in designing cognitive architectures such as Soar [ Laird et al., 1986 ] . This chapter considers the question of what constitutes an appropriate general-purpose learning mechanism. We are interested in mechanisms that might explain and reproduce the rich variety of learning capabilities of humans, ranging from learning perceptual-motor skills such as how to ride a bicycle, to learning highly cognitive tasks such as how to play chess. Research on learning in fields such as cognitive science, artificial intelligence, neurobiology, and statistics has led to the identification of two distinct classes of learning methods: inductive and analytic. Inductive methods, such as neural network Backpropagation, learn general laws by finding statistical correlations and regularities among a large set of training examples. In contrast, analytical methods, such as Explanation-Based Learning, acquire general laws from many fewer training examples. They rely instead on prior knowledge to analyze individual training examples in detail, then use this analysis to distinguish relevant example features from the irrelevant. The question considered in this chapter is how to best combine inductive and analytical learning in an architecture that seeks to cover the range of learning exhibited by intelligent systems such as humans. We present a specific learning mechanism, Explanation Based Neural Network learning (EBNN), that blends these two types of learning, and present experimental results demonstrating its ability to learn control strategies for a mobile robot using ",
+ "neighbors": [
+ 72,
+ 318,
+ 327,
+ 708
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 53,
+ "label": 2,
+ "text": "Title: Adaptive Tuning of Numerical Weather Prediction Models: Simultaneous Estimation of Weighting, Smoothing and Physical Parameters 1 \nAbstract: In recent years, case-based reasoning has been demonstrated to be highly useful for problem solving in complex domains. Also, mixed paradigm approaches emerged for combining CBR and induction techniques aiming at verifying the knowledge and/or building an efficient case memory. However, in complex domains induction over the whole problem space is often not possible or too time consuming. In this paper, an approach is presented which (owing to a close interaction with the CBR part) attempts to induce rules only for a particular context, i.e. for a problem just being solved by a CBR-oriented system. These rules may then be used for indexing purposes or similarity assessment in order to support the CBR process in the future. ",
+ "neighbors": [
+ 246
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 54,
+ "label": 6,
+ "text": "Title: Planning and Learning in an Adversarial Robotic Game \nAbstract: 1 This paper demonstrates the tandem use of a finite automata learning algorithm and a utility planner for an adversarial robotic domain. For many applications, robot agents need to predict the movement of objects in the environment and plan to avoid them. When the robot has no reasoning model of the object, machine learning techniques can be used to generate one. In our project, we learn a DFA model of an adversarial robot and use the automaton to predict the next move of the adversary. The robot agent plans a path to avoid the adversary at the predicted location while fulfilling the goal requirements. ",
+ "neighbors": [
+ 359,
+ 1053,
+ 1349
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 55,
+ "label": 3,
+ "text": "Title: Bayesian Forecasting of Multinomial Time Series through Conditionally Gaussian Dynamic Models \nAbstract: Claudia Cargnoni is with the Dipartimento Statistico, Universita di Firenze, 50100 Firenze, Italy. Peter Muller is Assistant Professor, and Mike West is Professor, in the Institute of Statistics and Decision Sciences at Duke University, Durham NC 27708-0251. Research of Cargnoni was performed while visiting ISDS during 1995. Muller and West were partially supported by NSF under grant DMS-9305699. ",
+ "neighbors": [
+ 441,
+ 706,
+ 1003
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 56,
+ "label": 1,
+ "text": "Title: Using Markov Chains to Analyze GAFOs \nAbstract: Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided. ",
+ "neighbors": [
+ 440,
+ 902
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 57,
+ "label": 2,
+ "text": "Title: Adaptive Noise Injection for Input Variables Relevance Determination \nAbstract: In this paper we consider the application of training with noise in multi-layer perceptron to input variables relevance determination. Noise injection is modified in order to penalize irrelevant features. The proposed algorithm is attractive as it requires the tuning of a single parameter. This parameter controls the penalization of the inputs together with the complexity of the model. After the presentation of the method, experimental evidences are given on simulated data sets.",
+ "neighbors": [
+ 191,
+ 631
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 58,
+ "label": 2,
+ "text": "Title: Multivariate versus Univariate Decision Trees \nAbstract: COINS Technical Report 92-8 January 1992 Abstract In this paper we present a new multivariate decision tree algorithm LMDT, which combines linear machines with decision trees. LMDT constructs each test in a decision tree by training a linear machine and then eliminating irrelevant and noisy variables in a controlled manner. To examine LMDT's ability to find good generalizations we present results for a variety of domains. We compare LMDT empirically to a univariate decision tree algorithm and observe that when multivariate tests are the appropriate bias for a given data set, LMDT finds small accurate trees. ",
+ "neighbors": [
+ 1026,
+ 1028,
+ 1057,
+ 1305
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 59,
+ "label": 4,
+ "text": "Title: NEUROCONTROL BY REINFORCEMENT LEARNING \nAbstract: Reinforcement learning (RL) is a model-free tuning and adaptation method for control of dynamic systems. Contrary to supervised learning, based usually on gradient descent techniques, RL does not require any model or sensitivity function of the process. Hence, RL can be applied to systems that are poorly understood, uncertain, nonlinear or for other reasons untractable with conventional methods. In reinforcement learning, the overall controller performance is evaluated by a scalar measure, called reinforcement. Depending on the type of the control task, reinforcement may represent an evaluation of the most recent control action or, more often, of an entire sequence of past control moves. In the latter case, the RL system learns how to predict the outcome of each individual control action. This prediction is then used to adjust the parameters of the controller. The mathematical background of RL is closely related to optimal control and dynamic programming. This paper gives a comprehensive overview of the RL methods and presents an application to the attitude control of a satellite. Some well known applications from the literature are reviewed as well. ",
+ "neighbors": [
+ 169,
+ 263,
+ 269,
+ 327
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 60,
+ "label": 3,
+ "text": "Title: Computing upper and lower bounds on likelihoods in intractable networks \nAbstract: We present deterministic techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks. These techniques become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the bounds by numerical experi ments.",
+ "neighbors": [
+ 61,
+ 143,
+ 723,
+ 1046
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 61,
+ "label": 3,
+ "text": "Title: Recursive algorithms for approximating probabilities in graphical models \nAbstract: MIT Computational Cognitive Science Technical Report 9604 Abstract We develop a recursive node-elimination formalism for efficiently approximating large probabilistic networks. No constraints are set on the network topologies. Yet the formalism can be straightforwardly integrated with exact methods whenever they are/become applicable. The approximations we use are controlled: they maintain consistently upper and lower bounds on the desired quantities at all times. We show that Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) can be handled within the same framework. The accuracy of the methods is verified exper imentally.",
+ "neighbors": [
+ 60,
+ 143,
+ 176,
+ 723
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 62,
+ "label": 6,
+ "text": "Title: A General Lower Bound on the Number of Examples Needed for Learning \nAbstract: We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. ",
+ "neighbors": [
+ 94,
+ 259,
+ 280,
+ 306,
+ 371,
+ 392,
+ 452,
+ 513,
+ 556,
+ 927,
+ 1023,
+ 1094
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 63,
+ "label": 2,
+ "text": "Title: Data Exploration Using Self-Organizing Maps \nAbstract: We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. ",
+ "neighbors": [
+ 399,
+ 430,
+ 432
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 64,
+ "label": 2,
+ "text": "Title: Interpretable Neural Networks with BP-SOM \nAbstract: Interpretation of models induced by artificial neural networks is often a difficult task. In this paper we focus on a relatively novel neural network architecture and learning algorithm, bp-som, that offers possibilities to overcome this difficulty. It is shown that networks trained with bp-som show interesting regularities, in that hidden-unit activations become restricted to discrete values, and that the som part can be exploited for automatic rule extraction.",
+ "neighbors": [
+ 332,
+ 365,
+ 432,
+ 510
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 65,
+ "label": 6,
+ "text": "Title: A Generalization of Sauer's Lemma \nAbstract: The discrimination powers of Multilayer perceptron (MLP) and Learning Vector Quantisation (LVQ) networks are compared for overlapping Gaussian distributions. It is shown, both analytically and with Monte Carlo studies, that the MLP network handles high dimensional problems in a more efficient way than LVQ. This is mainly due to the sigmoidal form of the MLP transfer function, but also to the the fact that the MLP uses hyper-planes more efficiently. Both algorithms are equally robust to limited training sets and the learning curves fall off like 1=M, where M is the training set size, which is compared to theoretical predictions from statistical estimates and Vapnik-Chervonenkis bounds. ",
+ "neighbors": [
+ 94
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 66,
+ "label": 0,
+ "text": "Title: Rate of Convergence of the Gibbs Sampler by Gaussian Approximation SUMMARY \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
+ "neighbors": [
+ 50,
+ 1248
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 67,
+ "label": 3,
+ "text": "Title: How Many Clusters? Which Clustering Method? Answers Via Model-Based Cluster Analysis 1 \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
+ "neighbors": [
+ 85,
+ 254,
+ 293
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 68,
+ "label": 1,
+ "text": "Title: Genetic Programming Exploratory Power and the Discovery of Functions \nAbstract: Hierarchical genetic programming (HGP) approaches rely on the discovery, modification, and use of new functions to accelerate evolution. This paper provides a qualitative explanation of the improved behavior of HGP, based on an analysis of the evolution process from the dual perspective of diversity and causality. From a static point of view, the use of an HGP approach enables the manipulation of a population of higher diversity programs. Higher diversity increases the exploratory ability of the genetic search process, as demonstrated by theoretical and experimental fitness distributions and expanded structural complexity of individuals. From a dynamic point of view, an analysis of the causality of the crossover operator suggests that HGP discovers and exploits useful structures in a bottom-up, hierarchical manner. Diversity and causality are complementary, affecting exploration and exploitation in genetic search. Unlike other machine learning techniques that need extra machinery to control the tradeoff between them, HGP automatically trades off exploration and exploitation. ",
+ "neighbors": [
+ 106,
+ 667,
+ 1145
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 69,
+ "label": 2,
+ "text": "Title: Self-Organization and Functional Role of Lateral Connections and Multisize Receptive Fields in the Primary Visual Cortex \nAbstract: Cells in the visual cortex are selective not only to ocular dominance and orientation of the input, but also to its size and spatial frequency. The simulations reported in this paper show how size selectivity could develop through Hebbian self-organization, and how receptive fields of different sizes could organize into columns like those for orientation and ocular dominance. The lateral connections in the network self-organize cooperatively and simultaneously with the receptive field sizes, and produce patterns of lateral connectivity that closely follow the receptive field organization. Together with our previous work on ocular dominance and orientation selectivity, these results suggest that a single Hebbian self-organizing process can give rise to all the major receptive field properties in the visual cortex, and also to structured patterns of lateral interactions, some of which have been verified experimentally and others predicted by the model. The model also suggests a functional role for the self-organized structures: The afferent receptive fields develop a sparse coding of the visual input, and the recurrent lateral interactions eliminate redundancies in cortical activity patterns, allowing the cortex to efficiently process massive amounts of visual information. ",
+ "neighbors": [
+ 6,
+ 430,
+ 432,
+ 1167
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 70,
+ "label": 1,
+ "text": "Title: Evolving Networks: Using the Genetic Algorithm with Connectionist Learning \nAbstract: A pilot study is described on the practical application of artificial neural networks. The limit cycle of the attitude control of a satellite is selected as the test case. One of the sources of the limit cycle is a position dependent error in the observed attitude. A Reinforcement Learning method is selected, which is able to adapt a controller such that a cost function is optimised. An estimate of the cost function is learned by a neural `critic'. In our approach, the estimated cost function is directly represented as a function of the parameters of a linear controller. The critic is implemented as a CMAC network. Results from simulations show that the method is able to find optimal parameters without unstable behaviour. In particular in the case of large discontinuities in the attitude measurements, the method shows a clear improvement compared to the conventional approach: the RMS attitude error decreases approximately 30%. ",
+ "neighbors": [
+ 4,
+ 10,
+ 91,
+ 106,
+ 308,
+ 675,
+ 788,
+ 955,
+ 1138,
+ 1161,
+ 1222,
+ 1254
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 71,
+ "label": 3,
+ "text": "Title: The Expectation-Maximization Algorithm for MAP Estimation \nAbstract: The Expectation-Maximization algorithm given by Dempster et al (1977) has enjoyed considerable popularity for solving MAP estimation problems. This note gives a simple derivation of the algorithm, due to Luttrell (1994), that better illustrates the convergence properties of the algorithm and its variants. The algorithm is illustrated with two examples: pooling data from multiple noisy sources and fitting a mixture density.",
+ "neighbors": [
+ 42
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 72,
+ "label": 5,
+ "text": "Title: Theory Refinement Combining Analytical and Empirical Methods \nAbstract: This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis. ",
+ "neighbors": [
+ 52,
+ 88,
+ 624,
+ 708,
+ 771,
+ 790,
+ 823,
+ 858,
+ 974,
+ 1290,
+ 1303
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 73,
+ "label": 3,
+ "text": "Title: Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications \nAbstract: Suppose one wishes to sample from the density (x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution (ujx) can be defined, giving the joint distribution (x; u) = (x)(ujx). A MCMC scheme which samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of Swendsen and Wang (1987) is one such example. In addition to reviewing the Swendsen-Wang algorithm and its generalizations, this paper introduces a new auxiliary variable method called partial decoupling. Two applications in Bayesian image analysis are considered. The first is a binary classification problem in which partial decoupling out performs SW and single site Metropolis. The second is a PET reconstruction which uses the gray level prior of Geman and McClure (1987). A generalized Swendsen-Wang algorithm is developed for this problem, which reduces the computing time to the point that MCMC is a viable method of posterior exploration.",
+ "neighbors": [
+ 74,
+ 235,
+ 433
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 74,
+ "label": 3,
+ "text": "Title: Convergence properties of perturbed Markov chains \nAbstract: Acknowledgements. We thank Neal Madras, Radford Neal, Peter Rosenthal, and Richard Tweedie for helpful conversations. This work was partially supported by EPSRC of the U.K., and by NSERC of Canada. ",
+ "neighbors": [
+ 73,
+ 235,
+ 433,
+ 518,
+ 947,
+ 949
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 75,
+ "label": 1,
+ "text": "Title: Hierarchical Self-Organization in Genetic Programming \nAbstract: This paper presents an approach to automatic discovery of functions in Genetic Programming. The approach is based on discovery of useful building blocks by analyzing the evolution trace, generalizing blocks to define new functions, and finally adapting the problem representation on-the-fly. Adaptating the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Measures of complexity of solution trees are defined for an adaptive representation framework. The minimum description length principle is applied to justify the feasibility of approaches based on a hierarchy of discovered functions and to suggest alternative ways of defining a problem's fitness function. Preliminary empirical results are presented.",
+ "neighbors": [
+ 91,
+ 106,
+ 667
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 76,
+ "label": 2,
+ "text": "Title: PATTERN RECOGNITION VIA LINEAR PROGRAMMING THEORY AND APPLICATION TO MEDICAL DIAGNOSIS \nAbstract: A decision problem associated with a fundamental nonconvex model for linearly inseparable pattern sets is shown to be NP-complete. Another nonconvex model that employs an 1 norm instead of the 2-norm, can be solved in polynomial time by solving 2n linear programs, where n is the (usually small) dimensionality of the pattern space. An effective LP-based finite algorithm is proposed for solving the latter model. The algorithm is employed to obtain a noncon-vex piecewise-linear function for separating points representing measurements made on fine needle aspirates taken from benign and malignant human breasts. A computer program trained on 369 samples has correctly diagnosed each of 45 new samples encountered and is currently in use at the University of Wisconsin Hospitals. 1. Introduction. The fundamental problem we wish to address is that of ",
+ "neighbors": [
+ 127,
+ 129,
+ 720,
+ 737
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 77,
+ "label": 2,
+ "text": "Title: The Observers Paradox: Apparent Computational Complexity in Physical Systems \nAbstract: Many connectionist approaches to musical expectancy and music composition let the question of What next? overshadow the equally important question of When next?. One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listeners internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy. ",
+ "neighbors": [
+ 106,
+ 249
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 78,
+ "label": 1,
+ "text": "Title: LIBGA: A USER-FRIENDLY WORKBENCH FOR ORDER-BASED GENETIC ALGORITHM RESEARCH \nAbstract: Over the years there has been several packages developed that provide a workbench for genetic algorithm (GA) research. Most of these packages use the generational model inspired by GENESIS. A few have adopted the steady-state model used in Genitor. Unfortunately, they have some deficiencies when working with order-based problems such as packing, routing, and scheduling. This paper describes LibGA, which was developed specifically for order-based problems, but which also works easily with other kinds of problems. It offers an easy to use `user-friendly' interface and allows comparisons to be made between both generational and steady-state genetic algorithms for a particular problem. It includes a variety of genetic operators for reproduction, crossover, and mutation. LibGA makes it easy to use these operators in new ways for particular applications or to develop and include new operators. Finally, it offers the unique new feature of a dynamic generation gap. ",
+ "neighbors": [
+ 91,
+ 683,
+ 687,
+ 851,
+ 1195
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 79,
+ "label": 2,
+ "text": "Title: Convergence-Zone Episodic Memory: Analysis and Simulations \nAbstract: Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, long-term storage within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived, and it can be an order of magnitude or higher than the number of all units in the model, and several orders of magnitude higher than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and making the connectivity between layers sparser causes an even further increase in capacity. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), with a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage and associative retrieval capability and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected to the perceptual maps. ",
+ "neighbors": [
+ 240,
+ 432
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 80,
+ "label": 4,
+ "text": "Title: Multiagent Reinforcement Learning: Theoretical Framework and an Algorithm \nAbstract: In this paper, we adopt general-sum stochastic games as a framework for multiagent reinforcement learning. Our work extends previous work by Littman on zero-sum stochastic games to a broader framework. We design a multiagent Q-learning method under this framework, and prove that it converges to a Nash equilibrium under specified conditions. This algorithm is useful for finding the optimal strategy when there exists a unique Nash equilibrium in the game. When there exist multiple Nash equilibria in the game, this algorithm should be combined with other learning techniques to find optimal strategies.",
+ "neighbors": [
+ 120,
+ 260,
+ 382,
+ 920,
+ 939
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 81,
+ "label": 6,
+ "text": "Title: Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach \nAbstract: Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems reveals specific strengths and weaknesses in terms of these two desired properties. Designed to capture the underlying qualities of each system, a new system uses theory-guided constructive induction. Experiments in three domains show improvement over previous theory-guided systems. This leads to a study of the behavior, limitations, and potential of theory-guided constructive induction.",
+ "neighbors": [
+ 208,
+ 892
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 82,
+ "label": 2,
+ "text": "Title: Replicability of Neural Computing Experiments \nAbstract: If an experiment requires statistical analysis to establish a result, then one should do a better experiment. Ernest Rutherford, 1930 Most proponents of cold fusion reporting excess heat from their electrolysis experiments were claiming that one of the main characteristics of cold fusion was its irreproducibility | J.R. Huizenga, Cold Fusion, 1993, p. 78 Abstract Amid the ever increasing research into various aspects of neural computing, much progress is evident both from theoretical advances and from empirical studies. On the empirical side a wealth of data from experimental studies is being reported. It is, however, not clear how best to report neural computing experiments such that they may be replicated by other interested researchers. In particular, the nature of iterative learning on a randomised initial architecture, such as backpropagation training of a multilayer perceptron, is such that precise replication of a reported result is virtually impossible. The outcome is that experimental replication of reported results, a touchstone of \"the scientific method\", is not an option for researchers in this most popular subfield of neural computing. In this paper, we address this issue of replicability of experiments based on backpropagation training of multilayer perceptrons (although many of our results will be applicable to any other subfield that is plagued by the same characteristics). First, we attempt to produce a complete abstract specification of such a neural computing experiment. From this specification we identify the full range of parameters needed to support maximum replicability, and we use it to show why absolute replicability is not an option in practice. We propose a statistical framework to support replicability. We demonstrate this framework with some empirical studies of our own on both repli-cability with respect to experimental controls, and validity of implementations of the backpropagation algorithm. Finally, we suggest how the degree of replicability of a neural computing experiment can be estimated and reflected in the claimed precision for any empirical results reported. ",
+ "neighbors": [
+ 4,
+ 779
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 83,
+ "label": 2,
+ "text": "Title: Living in a partially structured environment: How to bypass the limitations of classical reinforcement techniques \nAbstract: In this paper, we propose an unsupervised neural network allowing a robot to learn sensori-motor associations with a delayed reward. The robot task is to learn the \"meaning\" of pictograms in order to \"survive\" in a maze. First, we introduce a new neural conditioning rule (PCR: Probabilistic Conditioning Rule) allowing to test hypotheses (associations between visual categories and movements) during a given time span. Second, we describe a real maze experiment with our mobile robot. We propose a neural architecture to solve this problem and we discuss the difficulty to build visual categories dynamically while associating them to movements. Third, we propose to use our algorithm on a simulation in order to test it exhaustively. We give the results for different kind of mazes and we compare our system to an adapted version of the Q-learning algorithm. Finally, we conclude by showing the limitations of approaches that do not take into account the intrinsic complexity of a reasonning based on image recognition. ",
+ "neighbors": [
+ 169,
+ 432
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 84,
+ "label": 2,
+ "text": "Title: Data-driven Modeling and Synthesis of Acoustical Instruments \nAbstract: We present a framework for the analysis and synthesis of acoustical instruments based on data-driven probabilistic inference modeling. Audio time series and boundary conditions of a played instrument are recorded and the non-linear mapping from the control data into the audio space is inferred using the general inference framework of Cluster-Weighted Modeling. The resulting model is used for real-time synthesis of audio sequences from new input data.",
+ "neighbors": [
+ 40
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 85,
+ "label": 3,
+ "text": "Title: Inference in Model-Based Cluster Analysis \nAbstract: Technical Report no. 285 Department of Statistics University of Washington. March 10, 1995 ",
+ "neighbors": [
+ 47,
+ 67,
+ 293
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 86,
+ "label": 6,
+ "text": "Title: A Practical Bayesian Framework for Backprop Networks \nAbstract: A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). ",
+ "neighbors": [
+ 43,
+ 100,
+ 122,
+ 215,
+ 323,
+ 444,
+ 534,
+ 561,
+ 580,
+ 752,
+ 951,
+ 1078
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 87,
+ "label": 5,
+ "text": "Title: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor \nAbstract: Simultaneous multithreading is a technique that permits multiple independent threads to issue multiple instructions each cycle. In previous work we demonstrated the performance potential of simultaneous multithreading, based on a somewhat idealized model. In this paper we show that the throughput gains from simultaneous multithreading can be achieved without extensive changes to a conventional wide-issue superscalar, either in hardware structures or sizes. We present an architecture for simultaneous multithreading that achieves three goals: (1) it minimizes the architectural impact on the conventional superscalar design, (2) it has minimal performance impact on a single thread executing alone, and (3) it achieves significant throughput gains when running multiple threads. Our simultaneous multithreading architecture achieves a throughput of 5.4 instructions per cycle, a 2.5-fold improvement over an unmodified superscalar with similar hardware resources. This speedup is enhanced by an advantage of multithreading previously unexploited in other architectures: the ability to favor for fetch and issue those threads most efficiently using the processor each cycle, thereby providing the best instructions to the processor. ",
+ "neighbors": [
+ 103,
+ 349,
+ 410
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 88,
+ "label": 6,
+ "text": "Title: Bias-Driven Revision of Logical Domain Theories \nAbstract: The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the ``ow'' of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair awed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories.",
+ "neighbors": [
+ 72,
+ 1098,
+ 1290,
+ 1339
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 89,
+ "label": 2,
+ "text": "Title: EVALUATION OF GAUSSIAN PROCESSES AND OTHER METHODS FOR NON-LINEAR REGRESSION \nAbstract: The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the ``ow'' of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair awed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories.",
+ "neighbors": [
+ 188,
+ 267,
+ 1344
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 90,
+ "label": 3,
+ "text": "Title: On Bayesian analysis of mixtures with an unknown number of components Summary \nAbstract: New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods, that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context.",
+ "neighbors": [
+ 397,
+ 414,
+ 441,
+ 569,
+ 648
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 91,
+ "label": 1,
+ "text": "Title: 4 Implementing Application Specific Routines Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley. \nAbstract: To implement a specific application, you should only have to change the file app.c. Section 2 describes the routines in app.c in detail. If you use additional variables for your specific problem, the easiest method of making them available to other program units is to declare them in sga.h and external.h. However, take care that you do not redeclare existing variables. Two example applications files are included in the SGA-C distribution. The file app1.c performs the simple example problem included with the Pascal version; finding the maximum of x 10 , where x is an integer interpretation of a chromosome. A slightly more complex application is include in app2.c. This application illustrates two features that have been added to SGA-C. The first of these is the ithruj2int function, which converts bits i through j in a chromosome to an integer. The second new feature is the utility pointer that is associated with each population member. The example application interprets each chromosome as a set of concatenated integers in binary form. The lengths of these integer fields is determined by the user-specified value of field size, which is read in by the function app data(). The field size must be less than the smallest of the chromosome length and the length of an unsigned integer. An integer array for storing the interpreted form of each chromosome is dynamically allocated and assigned to the chromosome's utility pointer in app malloc(). The ithruj2int routine (see utility.c) is used to translate each chromosome into its associated vector. The fitness for each chromosome is simply the sum of the squares of these integers. This example application will function for any chromosome length. SGA-C is intended to be a simple program for first-time GA experimentation. It is not intended to be definitive in terms of its efficiency or the grace of its implementation. The authors are interested in the comments, criticisms, and bug reports from SGA-C users, so that the code can be refined for easier use in subsequent versions. Please email your comments to rob@galab2.mh.ua.edu, or write to TCGA: The authors gratefully acknowledge support provided by NASA under Grant NGT-50224 and support provided by the National Science Foundation under Grant CTS-8451610. We also thank Hillol Kargupta for donating his tournament selection implementation. Booker, L. B. (1982). Intelligent behavior as an adaptation to the task environment (Doctoral dissertation, Technical Report No. 243. Ann Arbor: University of Michigan, Logic of Computers Group). Dissertations Abstracts International, 43(2), 469B. (University Microfilms No. 8214966) ",
+ "neighbors": [
+ 10,
+ 22,
+ 28,
+ 70,
+ 75,
+ 78,
+ 96,
+ 106,
+ 107,
+ 108,
+ 123,
+ 134,
+ 168,
+ 218,
+ 224,
+ 228,
+ 234,
+ 237,
+ 300,
+ 325,
+ 351,
+ 365,
+ 383,
+ 415,
+ 416,
+ 419,
+ 428,
+ 429,
+ 439,
+ 454,
+ 462,
+ 499,
+ 529,
+ 555,
+ 563,
+ 590,
+ 603,
+ 607,
+ 610,
+ 611,
+ 622,
+ 627,
+ 629,
+ 632,
+ 640,
+ 643,
+ 645,
+ 652,
+ 654,
+ 664,
+ 676,
+ 677,
+ 683,
+ 687,
+ 688,
+ 704,
+ 707,
+ 715,
+ 717,
+ 731,
+ 732,
+ 745,
+ 746,
+ 766,
+ 777,
+ 786,
+ 809,
+ 817,
+ 834,
+ 843,
+ 851,
+ 856,
+ 862,
+ 876,
+ 877,
+ 878,
+ 880,
+ 902,
+ 937,
+ 940,
+ 941,
+ 948,
+ 950,
+ 955,
+ 958,
+ 978,
+ 986,
+ 1014,
+ 1060,
+ 1089,
+ 1143,
+ 1145,
+ 1151,
+ 1153,
+ 1154,
+ 1156,
+ 1177,
+ 1194,
+ 1195,
+ 1221,
+ 1222,
+ 1236,
+ 1254,
+ 1295,
+ 1296,
+ 1312,
+ 1314,
+ 1334,
+ 1338
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 92,
+ "label": 4,
+ "text": "Title: Auto-exploratory Average Reward Reinforcement Learning \nAbstract: We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this \"Auto-exploratory H-learning\" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration. ",
+ "neighbors": [
+ 318,
+ 320,
+ 322,
+ 811
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 93,
+ "label": 1,
+ "text": "Title: Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques \nAbstract: This paper proposes using fuzzy logic techniques to dynamically control parameter settings of genetic algorithms (GAs). We describe the Dynamic Parametric GA: a GA that uses a fuzzy knowledge-based system to control GA parameters. We then introduce a technique for automatically designing and tuning the fuzzy knowledge-base system using GAs. Results from initial experiments show a performance improvement over a simple static GA. One Dynamic Parametric GA system designed by our automatic method demonstrated improvement on an application not included in the design phase, which may indicate the general applicability of the Dynamic Parametric GA to a wide range of ap plications.",
+ "neighbors": [
+ 955,
+ 967,
+ 1314
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 94,
+ "label": 6,
+ "text": "Title: Characterizations of Learnability for Classes of f0; ng-valued Functions \nAbstract: We study layered belief networks of binary random variables in which the conditional probabilities Pr[childjparents] depend monotonically on weighted sums of the parents. For these networks, we give efficient algorithms for computing rigorous bounds on the marginal probabilities of evidence at the output layer. Our methods apply generally to the computation of both upper and lower bounds, as well as to generic transfer function parameterizations of the conditional probability tables (such as sigmoid and noisy-OR). We also prove rates of convergence of the accuracy of our bounds as a function of network size. Our results are derived by applying the theory of large deviations to the weighted sums of parents at each node in the network. Bounds on the marginal probabilities are computed from two contributions: one assuming that these weighted sums fall near their mean values, and the other assuming that they do not. This gives rise to an interesting trade-off between probable explanations of the evidence and improbable deviations from the mean. In networks where each child has N parents, the gap between our upper and lower bounds behaves as a sum of two terms, one of order p In addition to providing such rates of convergence for large networks, our methods also yield efficient algorithms for approximate inference in fixed networks. ",
+ "neighbors": [
+ 62,
+ 65
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 95,
+ "label": 4,
+ "text": "Title: An Upper Bound on the Loss from Approximate Optimal-Value Functions \nAbstract: Many reinforcement learning (RL) approaches can be formulated from the theory of Markov decision processes and the associated method of dynamic programming (DP). The value of this theoretical understanding, however, is tempered by many practical concerns. One important question is whether DP-based approaches that use function approximation rather than lookup tables, can avoid catastrophic effects on performance. This note presents a result in Bertsekas (1987) which guarantees that small errors in the approximation of a task's optimal value function cannot produce arbitrarily bad performance when actions are selected greedily. We derive an upper bound on performance loss which is slightly tighter than that in Bertsekas (1987), and we show the extension of the bound to Q-learning (Watkins, 1989). These results provide a theoretical justification for a practice that is common in reinforcement learning. ",
+ "neighbors": [
+ 169,
+ 318,
+ 327,
+ 328,
+ 334,
+ 776,
+ 1269
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 96,
+ "label": 2,
+ "text": "Title: Symbolic and Subsymbolic Learning for Vision: Some Possibilities \nAbstract: Robust, flexible and sufficiently general vision systems such as those for recognition and description of complex 3-dimensional objects require an adequate armamentarium of representations and learning mechanisms. This paper briefly analyzes the strengths and weaknesses of different learning paradigms such as symbol processing systems, connectionist networks, and statistical and syntactic pattern recognition systems as possible candidates for providing such capabilities and points out several promising directions for integrating multiple such paradigms in a synergistic fashion towards that goal. ",
+ "neighbors": [
+ 91,
+ 286,
+ 288
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 97,
+ "label": 5,
+ "text": "Title: Knowledge Integration and Learning \nAbstract: LIACC - Technical Report 91-1 Abstract. In this paper we address the problem of acquiring knowledge by integration . Our aim is to construct an integrated knowledge base from several separate sources. The objective of integration is to construct one system that exploits all the knowledge that is available and has good performance. The aim of this paper is to discuss the methodology of knowledge integration and present some concrete results. In our experiments the performance of the integrated theory exceeded the performance of the individual theories by quite a significant amount. Also, the performance did not fluctuate much when the experiments were repeated. These results indicate knowledge integration can complement other existing ML methods. ",
+ "neighbors": [
+ 438
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 98,
+ "label": 6,
+ "text": "Title: Evaluation and Selection of Biases in Machine Learning \nAbstract: In this introduction, we define the term bias as it is used in machine learning systems. We motivate the importance of automated methods for evaluating and selecting biases using a framework of bias selection as search in bias and meta-bias spaces. Recent research in the field of machine learning bias is summarized. ",
+ "neighbors": [
+ 242,
+ 371,
+ 514,
+ 554
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 99,
+ "label": 2,
+ "text": "Title: REDUCED MEMORY REPRESENTATIONS FOR MUSIC \nAbstract: We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggest that perceived constancy across musical variation is a natural result of a reductionist mechanism for producing memory representations. ",
+ "neighbors": [
+ 201
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 100,
+ "label": 6,
+ "text": "Title: Ensemble Learning and Evidence Maximization \nAbstract: Ensemble learning by variational free energy minimization is a tool introduced to neural networks by Hinton and van Camp in which learning is described in terms of the optimization of an ensemble of parameter vectors. The optimized ensemble is an approximation to the posterior probability distribution of the parameters. This tool has now been applied to a variety of statistical inference problems. In this paper I study a linear regression model with both parameters and hyper-parameters. I demonstrate that the evidence approximation for the optimization of regularization constants can be derived in detail from a free energy minimization viewpoint. ",
+ "neighbors": [
+ 42,
+ 86,
+ 385,
+ 444
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 101,
+ "label": 3,
+ "text": "Title: Adaptation for Self Regenerative MCMC SUMMARY \nAbstract: The self regenerative MCMC is a tool for constructing a Markov chain with a given stationary distribution by constructing an auxiliary chain with some other stationary distribution . Elements of the auxiliary chain are picked a suitable random number of times so that the resulting chain has the stationary distribution , Sahu and Zhigljavsky (1998). In this article we provide a generic adaptation scheme for the above algorithm. The adaptive scheme is to use the knowledge of the stationary distribution gathered so far and then to update during the course of the simulation. This method is easy to implement and often leads to considerable improvement. We obtain theoretical results for the adaptive scheme. Our proposed methodology is illustrated with a number of realistic examples in Bayesian computation and its performance is compared with other available MCMC techniques. In one of our applications we develop a non-linear dynamics model for modeling predator-prey relationships in the wild. ",
+ "neighbors": [
+ 266,
+ 281
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 102,
+ "label": 0,
+ "text": "Title: Conceptual Analogy \nAbstract: Conceptual analogy (CA) is an approach that integrates conceptualization, i.e., memory organization based on prior experiences and analogical reasoning (Borner 1994a). It was implemented prototypically and tested to support the design process in building engineering (Borner and Janetzko 1995, Borner 1995). There are a number of features that distinguish CA from standard approaches to CBR and AR. First of all, CA automatically extracts the knowledge needed to support design tasks (i.e., complex case representations, the relevance of object features and relations, and proper adaptations) from attribute-value representations of prior layouts. Secondly, it effectively determines the similarity of complex case representations in terms of adaptability. Thirdly, implemented and integrated into a highly interactive and adaptive system architecture it allows for incremental knowledge acquisition and user support. This paper surveys the basic assumptions and the psychological results which influenced the development of CA. It sketches the knowledge representation formalisms employed and characterizes the sub-processes needed to integrate memory organization and analogical reasoning. ",
+ "neighbors": [
+ 37,
+ 256,
+ 309,
+ 311
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 103,
+ "label": 5,
+ "text": "Title: Multipath Execution: Opportunities and Limits \nAbstract: Even sophisticated branch-prediction techniques necessarily suffer some mispredictions, and even relatively small mispredict rates hurt performance substantially in current-generation processors. In this paper, we investigate schemes for improving performance in the face of imperfect branch predictors by having the processor simultaneously execute code from both the taken and not-taken outcomes of a branch. This paper presents data regarding the limits of multipath execution, considers fetch-bandwidth needs for multipath execution, and discusses various dynamic confidence-prediction schemes that gauge the likelihood of branch mispredictions. Our evaluations consider executing along several (28) paths at once. Using 4 paths and a relatively simple confidence predictor, multipath execution garners speedups of up to 30% compared to the single-path case, with an average speedup of 14.4% for the SPECint suite. While associated increases in instruction-fetch-bandwidth requirements are not too surprising, a less expected result is the significance of having a separate return-address stack for each forked path. Overall, our results indicate that multipath execution offers significant improvements over single-path performance, and could be especially useful when combined with multithreading so that hardware costs can be amortized over both approaches. ",
+ "neighbors": [
+ 87,
+ 243
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 104,
+ "label": 4,
+ "text": "Title: Adaptive state space quantisation: adding and removing neurons \nAbstract: This paper describes a self-learning control system for a mobile robot. Based on local sensor data, a robot is taught to avoid collisions with obstacles. The only feedback to the control system is a binary-valued external reinforcement signal, which indicates whether or not a collision has occured. A reinforcement learning scheme is used to find a correct mapping from input (sensor) space to output (steering signal) space. An adaptive quantisation scheme is introduced, through which the discrete division of input space is built up from scratch by the system itself. ",
+ "neighbors": [
+ 169,
+ 328,
+ 344,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 105,
+ "label": 2,
+ "text": "Title: Evaluation and Ordering of Rules Extracted from Feedforward Networks \nAbstract: Rules extracted from trained feedforward networks can be used for explanation, validation, and cross-referencing of network output decisions. This paper introduces a rule evaluation and ordering mechanism that orders rules extracted from feedforward networks based on three performance measures. Detailed experiments using three rule extraction techniques as applied to the Wisconsin breast cancer database, illustrate the power of the proposed methods. Moreover, a method of integrating the output decisions of both the extracted rule-based system and the corresponding trained network is proposed. The integrated system provides further improvements. ",
+ "neighbors": [
+ 261
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 106,
+ "label": 1,
+ "text": "Title: Coevolving High-Level Representations \nAbstract: Rules extracted from trained feedforward networks can be used for explanation, validation, and cross-referencing of network output decisions. This paper introduces a rule evaluation and ordering mechanism that orders rules extracted from feedforward networks based on three performance measures. Detailed experiments using three rule extraction techniques as applied to the Wisconsin breast cancer database, illustrate the power of the proposed methods. Moreover, a method of integrating the output decisions of both the extracted rule-based system and the corresponding trained network is proposed. The integrated system provides further improvements. ",
+ "neighbors": [
+ 22,
+ 68,
+ 70,
+ 75,
+ 77,
+ 91,
+ 107,
+ 218,
+ 234,
+ 300,
+ 416,
+ 437,
+ 439
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 107,
+ "label": 1,
+ "text": "Title: An Evolutionary Algorithm that Constructs Recurrent Neural Networks \nAbstract: Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, which includes genetic algorithms and evolutionary programming, is a population-based search method that has shown promise in such complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. This algorithms empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods. ",
+ "neighbors": [
+ 22,
+ 91,
+ 106,
+ 224,
+ 1337
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 108,
+ "label": 1,
+ "text": "Title: USING MARKER-BASED GENETIC ENCODING OF NEURAL NETWORKS TO EVOLVE FINITE-STATE BEHAVIOUR \nAbstract: A new mechanism for genetic encoding of neural networks is proposed, which is loosely based on the marker structure of biological DNA. The mechanism allows all aspects of the network structure, including the number of nodes and their connectivity, to be evolved through genetic algorithms. The effectiveness of the encoding scheme is demonstrated in an object recognition task that requires artificial creatures (whose behaviour is driven by a neural network) to develop high-level finite-state exploration and discrimination strategies. The task requires solving the sensory-motor grounding problem, i.e. developing a functional understanding of the effects that a creature's movement has on its sensory input. ",
+ "neighbors": [
+ 10,
+ 91,
+ 169,
+ 224
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 109,
+ "label": 2,
+ "text": "Title: Soft Classification, a.k.a. Risk Estimation, via Penalized Log Likelihood and Smoothing Spline Analysis of Variance \nAbstract: We study a multivariate smoothing spline estimate of a function of several variables, based on an ANOVA decomposition as sums of main effect functions (of one variable), two-factor interaction functions (of two variables), etc. We derive the Bayesian \"confidence intervals\" for the components of this decomposition and demonstrate that, even with multiple smoothing parameters, they can be efficiently computed using the publicly available code RKPACK, which was originally designed just to compute the estimates. We carry out a small Monte Carlo study to see how closely the actual properties of these component-wise confidence intervals match their nominal confidence levels. Lastly, we analyze some lake acidity data as a function of calcium concentration, latitude, and longitude, using both polynomial and thin plate spline main effects in the same model. ",
+ "neighbors": [
+ 40,
+ 135,
+ 162,
+ 298
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 110,
+ "label": 5,
+ "text": "Title: d d Techniques for Extracting Instruction Level Parallelism on MIMD Architectures \nAbstract: Extensive research has been done on extracting parallelism from single instruction stream processors. This paper presents some results of our investigation into ways to modify MIMD architectures to allow them to extract the instruction level parallelism achieved by current superscalar and VLIW machines. A new architecture is proposed which utilizes the advantages of a multiple instruction stream design while addressing some of the limitations that have prevented MIMD architectures from performing ILP operation. A new code scheduling mechanism is described to support this new architecture by partitioning instructions across multiple processing elements in order to exploit this level of parallelism. ",
+ "neighbors": [
+ 111,
+ 410,
+ 423
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 111,
+ "label": 5,
+ "text": "Title: d d MISC: A Multiple Instruction Stream Computer \nAbstract: This paper describes a single chip Multiple Instruction Stream Computer (MISC) capable of extracting instruction level parallelism from a broad spectrum of programs. The MISC architecture uses multiple asynchronous processing elements to separate a program into streams that can be executed in parallel, and integrates a conflict-free message passing system into the lowest level of the processor design to facilitate low latency intra-MISC communication. This approach allows for increased machine parallelism with minimal code expansion, and provides an alternative approach to single instruction stream multi-issue machines such as SuperScalar and VLIW. ",
+ "neighbors": [
+ 110,
+ 410
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 112,
+ "label": 2,
+ "text": "Title: Sample Complexity for Learning Recurrent Perceptron Mappings \nAbstract: Recurrent perceptron classifiers generalize the classical perceptron model. They take into account those correlations and dependences among input coordinates which arise from linear digital filtering. This paper provides tight bounds on sample complexity associated to the fitting of such models to experimental data. ",
+ "neighbors": [
+ 307,
+ 815,
+ 1025
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 113,
+ "label": 2,
+ "text": "Title: Neural net architectures for temporal sequence processing \nAbstract: I present a general taxonomy of neural net architectures for processing time-varying patterns. This taxonomy subsumes many existing architectures in the literature, and points to several promising architectures that have yet to be examined. Any architecture that processes time-varying patterns requires two conceptually distinct components: a short-term memory that holds on to relevant past events and an associator that uses the short-term memory to classify or predict. My taxonomy is based on a characterization of short-term memory models along the dimensions of form, content, and adaptability. Experiments on predicting future values of a financial time series (US dollar-Swiss franc exchange rates) are presented using several alternative memory models. The results of these experiments serve as a baseline against which more sophisticated architectures can be compared. Neural networks have proven to be a promising alternative to traditional techniques for nonlinear temporal prediction tasks (e.g., Curtiss, Brandemuehl, & Kreider, 1992; Lapedes & Farber, 1987; Weigend, Huberman, & Rumelhart, 1992). However, temporal prediction is a particularly challenging problem because conventional neural net architectures and algorithms are not well suited for patterns that vary over time. The prototypical use of neural nets is in structural pattern recognition. In such a task, a collection of features|visual, semantic, or otherwise|is presented to a network and the network must categorize the input feature pattern as belonging to one or more classes. For example, a network might be trained to classify animal species based on a set of attributes describing living creatures such as \"has tail\", \"lives in water\", or \"is carnivorous\"; or a network could be trained to recognize visual patterns over a two-dimensional pixel array as a letter in fA; B; . . . ; Zg. In such tasks, the network is presented with all relevant information simultaneously. In contrast, temporal pattern recognition involves processing of patterns that evolve over time. The appropriate response at a particular point in time depends not only on the current input, but potentially all previous inputs. This is illustrated in Figure 1, which shows the basic framework for a temporal prediction problem. I assume that time is quantized into discrete steps, a sensible assumption because many time series of interest are intrinsically discrete, and continuous series can be sampled at a fixed interval. The input at time t is denoted x(t). For univariate series, this input ",
+ "neighbors": [
+ 201,
+ 240,
+ 951
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 114,
+ "label": 2,
+ "text": "Title: Natural Language Processing with Subsymbolic Neural Networks \nAbstract: ",
+ "neighbors": [
+ 159,
+ 427,
+ 432,
+ 918,
+ 1240
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 115,
+ "label": 2,
+ "text": "Title: Beyond the Cognitive Map: Contributions to a Computational Neuroscience Theory of Rodent Navigation for the\nAbstract: ",
+ "neighbors": [
+ 240,
+ 430,
+ 432
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 116,
+ "label": 2,
+ "text": "Title: NEURAL NETS AS SYSTEMS MODELS AND CONTROLLERS suitability of \"neural nets\" as models for dynamical\nAbstract: This paper briefly surveys some recent results relevant ",
+ "neighbors": [
+ 307,
+ 588,
+ 596,
+ 830,
+ 831,
+ 1025
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 117,
+ "label": 2,
+ "text": "Title: LEARNING BY ERROR-DRIVEN DECOMPOSITION \nAbstract: In this paper we describe a new selforganizing decomposition technique for learning high-dimensional mappings. Problem decomposition is performed in an error-driven manner, such that the resulting subtasks (patches) are equally well approximated. Our method combines an unsupervised learning scheme (Feature Maps [Koh84]) with a nonlinear approximator (Backpropagation [RHW86]). The resulting learning system is more stable and effective in changing environments than plain backpropagation and much more powerful than extended feature maps as proposed by [RS88, RMS89]. Extensions of our method give rise to active exploration strategies for autonomous agents facing unknown environments. The appropriateness of our general purpose method will be demonstrated with an ex ample from mathematical function approximation.",
+ "neighbors": [
+ 400,
+ 432,
+ 856
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 118,
+ "label": 6,
+ "text": "Title: Feature Subset Selection as Search with Probabilistic Estimates \nAbstract: Irrelevant features and weakly relevant features may reduce the comprehensibility and accuracy of concepts induced by supervised learning algorithms. We formulate the search for a feature subset as an abstract search problem with probabilistic estimates. Searching a space using an evaluation function that is a random variable requires trading off accuracy of estimates for increased state exploration. We show how recent feature subset selection algorithms in the machine learning literature fit into this search problem as simple hill climbing approaches, and conduct a small experiment using a best-first search technique. ",
+ "neighbors": [
+ 50,
+ 242,
+ 371,
+ 712,
+ 875,
+ 1211
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 119,
+ "label": 1,
+ "text": "Title: 17 Massively Parallel Genetic Programming \nAbstract: As the field of Genetic Programming (GP) matures and its breadth of application increases, the need for parallel implementations becomes absolutely necessary. The transputer-based system presented in the chapter by Koza and Andre ([11]) is one of the rare such parallel implementations. Until today, no implementation has been proposed for parallel GP using a SIMD architecture, except for a data-parallel approach ([20]), although others have exploited workstation farms and pipelined supercomputers. One reason is certainly the apparent difficulty of dealing with the parallel evaluation of different S-expressions when only a single instruction can be executed at the same time on every processor. The aim of this chapter is to present such an implementation of parallel GP on a SIMD system, where each processor can efficiently evaluate a different S-expression. We have implemented this approach on a MasPar MP-2 computer, and will present some timing results. To the extent that SIMD machines, like the MasPar are available to offer cost-effective cycles for scientific experimentation, this is a useful approach. The idea of simulating a MIMD machine using a SIMD architecture is not new ([8, 15]). One of the original ideas for the Connection Machine ([8]) was that it could simulate other parallel architectures. Indeed, in the extreme, each processor on a SIMD architecture can simulate a universal Turing machine (TM). With different turing machine specifications stored in each local memory, each processor would simply have its own tape, tape head, state table and state pointer, and the simulation would be performed by repeating the basic TM operations simultaneously. Of course, such a simulation would be very inefficient, and difficult to program, but would have the advantage of being really MIMD, where no SIMD processor would be in idle state, until its simulated machine halts. Now let us consider an alternative idea, that each SIMD processor would simulate an individual stored program computer using a simple instruction set. For each step of the simulation, the SIMD system would sequentially execute each possible instruction on the subset of processors whose next instruction match it. For a typical assembly language, even with a reduced instruction set, most processors would be idle most of the time. However, if the set of instructions implemented on the virtual processor is very small, this approach can be fruitful. In the case of Genetic Programming, the \"instruction set\" is composed of the specified set of functions designed for the task. We will show below that with a precompilation step, simply adding a push, a conditional, and unconditional branching and a stop instruction, we can get a very effective MIMD simulation running. This chapter reports such an implementation of GP on a MasPar MP-2 parallel computer. The configuration of our system is composed of 4K processor elements ",
+ "neighbors": [
+ 234,
+ 1206
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 120,
+ "label": 4,
+ "text": "Title: A Unified Analysis of Value-Function-Based Reinforcement-Learning Algorithms \nAbstract: Reinforcement learning is the problem of generating optimal behavior in a sequential decision-making environment given the opportunity of interacting with it. Many algorithms for solving reinforcement-learning problems work by computing improved estimates of the optimal value function. We extend prior analyses of reinforcement-learning algorithms and present a powerful new theorem that can provide a unified analysis of value-function-based reinforcement-learning algorithms. The usefulness of the theorem lies in how it allows the asynchronous convergence of a complex reinforcement-learning algorithm to be proven by verifying that a simpler synchronous algorithm converges. We illustrate the application of the theorem by analyzing the convergence of Q-learning, model-based reinforcement learning, Q-learning with multi-state updates, Q-learning for Markov games, and risk-sensitive reinforcement learning. ",
+ "neighbors": [
+ 80,
+ 170,
+ 318,
+ 426,
+ 811
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 121,
+ "label": 3,
+ "text": "Title: Using Path Diagrams as a Structural Equation Modelling Tool \nAbstract: Reinforcement learning is the problem of generating optimal behavior in a sequential decision-making environment given the opportunity of interacting with it. Many algorithms for solving reinforcement-learning problems work by computing improved estimates of the optimal value function. We extend prior analyses of reinforcement-learning algorithms and present a powerful new theorem that can provide a unified analysis of value-function-based reinforcement-learning algorithms. The usefulness of the theorem lies in how it allows the asynchronous convergence of a complex reinforcement-learning algorithm to be proven by verifying that a simpler synchronous algorithm converges. We illustrate the application of the theorem by analyzing the convergence of Q-learning, model-based reinforcement learning, Q-learning with multi-state updates, Q-learning for Markov games, and risk-sensitive reinforcement learning. ",
+ "neighbors": [
+ 377,
+ 850
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 122,
+ "label": 2,
+ "text": "Title: Bayesian Non-linear Modelling for the Prediction Competition \nAbstract: The 1993 energy prediction competition involved the prediction of a series of building energy loads from a series of environmental input variables. Non-linear regression using `neural networks' is a popular technique for such modeling tasks. Since it is not obvious how large a time-window of inputs is appropriate, or what preprocessing of inputs is best, this can be viewed as a regression problem in which there are many possible input variables, some of which may actually be irrelevant to the prediction of the output variable. Because a finite data set will show random correlations between the irrelevant inputs and the output, any conventional neural network (even with reg-ularisation or `weight decay') will not set the coefficients for these junk inputs to zero. Thus the irrelevant variables will hurt the model's performance. The Automatic Relevance Determination (ARD) model puts a prior over the regression parameters which embodies the concept of relevance. This is done in a simple and `soft' way by introducing multiple regularisation constants, one associated with each input. Using Bayesian methods, the regularisation constants for junk inputs are automatically inferred to be large, preventing those inputs from causing significant overfitting. ",
+ "neighbors": [
+ 43,
+ 86,
+ 267,
+ 343
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 123,
+ "label": 1,
+ "text": "Title: Issues in Evolutionary Robotics \nAbstract: A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. ",
+ "neighbors": [
+ 20,
+ 91,
+ 228,
+ 325,
+ 413,
+ 439,
+ 491,
+ 741,
+ 786,
+ 940,
+ 961,
+ 1295
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 124,
+ "label": 5,
+ "text": "Title: on Inductive Logic Programming (ILP-95) Inducing Logic Programs without Explicit Negative Examples \nAbstract: This paper presents a method for learning logic programs without explicit negative examples by exploiting an assumption of output completeness. A mode declaration is supplied for the target predicate and each training input is assumed to be accompanied by all of its legal outputs. Any other outputs generated by an incomplete program implicitly represent negative examples; however, large numbers of ground negative examples never need to be generated. This method has been incorporated into two ILP systems, Chillin and IFoil, both of which use intensional background knowledge. Tests on two natural language acquisition tasks, case-role mapping and past-tense learning, illustrate the advantages of the approach. ",
+ "neighbors": [
+ 894,
+ 995
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 125,
+ "label": 0,
+ "text": "Title: on Inductive Logic Programming (ILP-95) Inducing Logic Programs without Explicit Negative Examples \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
+ "neighbors": [
+ 50,
+ 1248
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 126,
+ "label": 2,
+ "text": "Title: A Neuro-Fuzzy Approach to Agglomerative Clustering \nAbstract: In this paper, we introduce a new agglomerative clustering algorithm in which each pattern cluster is represented by a collection of fuzzy hyperboxes. Initially, a number of such hyperboxes are calculated to represent the pattern samples. Then, the algorithm applies multi-resolution techniques to progressively \"combine\" these hyperboxes in a hierarchial manner. Such an agglomerative scheme has been found to yield encouraging results in real-world clustering problems. ",
+ "neighbors": [
+ 361
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 127,
+ "label": 6,
+ "text": "Title: Induction of Oblique Decision Trees \nAbstract: This paper introduces a randomized technique for partitioning examples using oblique hyperplanes. Standard decision tree techniques, such as ID3 and its descendants, partition a set of points with axis-parallel hyper-planes. Our method, by contrast, attempts to find hyperplanes at any orientation. The purpose of this more general technique is to find smaller but equally accurate decision trees than those created by other methods. We have tested our algorithm on both real and simulated data, and found that in some cases it produces surprisingly small trees without losing predictive accuracy. Small trees allow us, in turn, to obtain simple qualitative descriptions of each problem domain.",
+ "neighbors": [
+ 76,
+ 217,
+ 245,
+ 273,
+ 374,
+ 737
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 128,
+ "label": 1,
+ "text": "Title: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm \nAbstract: This paper introduces ICET, a new algorithm for costsensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for costsensitive classification EG2, CS-ID3, and IDX and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICETs search in bias space and discovers a way to improve the search.",
+ "neighbors": [
+ 151,
+ 522,
+ 1320
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 129,
+ "label": 2,
+ "text": "Title: Mathematical Programming in Neural Networks \nAbstract: This paper highlights the role of mathematical programming, particularly linear programming, in training neural networks. A neural network description is given in terms of separating planes in the input space that suggests the use of linear programming for determining these planes. A more standard description in terms of a mean square error in the output space is also given, which leads to the use of unconstrained minimization techniques for training a neural network. The linear programming approach is demonstrated by a brief description of a system for breast cancer diagnosis that has been in use for the last four years at a major medical facility.",
+ "neighbors": [
+ 76,
+ 240,
+ 720,
+ 721
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 130,
+ "label": 0,
+ "text": "Title: Understanding Creativity: A Case-Based Approach \nAbstract: Dissatisfaction with existing standard case-based reasoning (CBR) systems has prompted us to investigate how we can make these systems more creative and, more broadly, what would it mean for them to be more creative. This paper discusses three research goals: understanding creative processes better, investigating the role of cases and CBR in creative problem solving, and understanding the framework that supports this more interesting kind of case-based reasoning. In addition, it discusses methodological issues in the study of creativity and, in particular, the use of CBR as a research paradigm for exploring creativity.",
+ "neighbors": [
+ 15,
+ 278
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 131,
+ "label": 2,
+ "text": "Title: Stochastic Decomposition of DNA Sequences Using Hidden Markov Models \nAbstract: This work presents an application of a machine learning for characterizing an important property of natural DNA sequences compositional inhomogeneity. Compositional segments often correspond to meaningful biological units. Taking into account such inhomogeneity is a prerequisite of successful recognition of functional features in DNA sequences, especially, protein-coding genes. Here we present a technique for DNA segmentation using hidden Markov models. A DNA sequence is represented by a chain of homogeneous segments, each described by one of a few statistically discriminated hidden states, whose contents form a first-order Markov chain. The technique is used to describe and compare chromosomes I and IV of the completely sequenced Saccharomyces cerevisiae (yeast) genome. Our results indicate the existence of a few well separated states, which gives support to the isochore theory. We also explore the model's likelihood landscape and analyze the dynamics of the optimization process, thus addressing the problem of reliability of the obtained optima and efficiency of the algorithms. ",
+ "neighbors": [
+ 3,
+ 156,
+ 358
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 132,
+ "label": 2,
+ "text": "Title: Multiassociative Memory \nAbstract: This paper discusses the problem of how to implement many-to-many, or multi-associative, mappings within connectionist models. Traditional symbolic approaches wield explicit representation of all alternatives via stored links, or implicitly through enumerative algorithms. Classical pattern association models ignore the issue of generating multiple outputs for a single input pattern, and while recent research on recurrent networks is promising, the field has not clearly focused upon multi-associativity as a goal. In this paper, we define multiassociative memory MM, and several possible variants, and discuss its utility in general cognitive modeling. We extend sequential cascaded networks (Pollack 1987, 1990a) to fit the task, and perform several ini tial experiments which demonstrate the feasibility of the concept. This paper appears in The Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society. August 7-10, 1991. ",
+ "neighbors": [
+ 4,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 133,
+ "label": 0,
+ "text": "Title: Machine Learning Methods for International Conflict Databases: A Case Study in Predicting Mediation Outcome \nAbstract: This paper tries to identify rules and factors that are predictive for the outcome of international conflict management attempts. We use C4.5, an advanced Machine Learning algorithm, for generating decision trees and prediction rules from cases in the CONFMAN database. The results show that simple patterns and rules are often not only more understandable, but also more reliable than complex rules. Simple decision trees are able to improve the chances of correctly predicting the outcome of a conflict management attempt. This suggests that mediation is more repetitive than conflicts per se, where such results have not been achieved so far. ",
+ "neighbors": [
+ 242,
+ 712,
+ 904
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 134,
+ "label": 1,
+ "text": "Title: A Sequential Niche Technique for Multimodal Function Optimization \nAbstract: c fl UWCC COMMA Technical Report No. 93001, February 1993 x No part of this article may be reproduced for commercial purposes. Abstract A technique is described which allows unimodal function optimization methods to be extended to efficiently locate all optima of multimodal problems. We describe an algorithm based on a traditional genetic algorithm (GA). This involves iterating the GA, but uses knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found. This is achieved by applying a fitness derating function to the raw fitness function, so that fitness values are depressed in the regions of the problem space where solutions have already been found. Consequently, the likelihood of discovering a new solution on each iteration is dramatically increased. The technique may be used with various styles of GA, or with other optimization methods, such as simulated annealing. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The technique is at least as fast as fitness sharing methods. It provides a speedup of between 1 and 10p on a problem with p optima, depending on the value of p and the convergence time complexity. ",
+ "neighbors": [
+ 91,
+ 603
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 135,
+ "label": 2,
+ "text": "Title: Learning from Examples, Agent Teams and the Concept of Reflection \nAbstract: In International Journal of Pattern Recognition and AI, 10(3):251-272, 1996 Also available as GMD report #766 ",
+ "neighbors": [
+ 24,
+ 109,
+ 174,
+ 993
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 136,
+ "label": 4,
+ "text": "Title: Robust Value Function Approximation by Working Backwards Computing an accurate value function is the key\nAbstract: In this paper, we examine the intuition that TD() is meant to operate by approximating asynchronous value iteration. We note that on the important class of discrete acyclic stochastic tasks, value iteration is inefficient compared with the DAG-SP algorithm, which essentially performs only one sweep instead of many by working backwards from the goal. The question we address in this paper is whether there is an analogous algorithm that can be used in large stochastic state spaces requiring function approximation. We present such an algorithm, analyze it, and give comparative results to TD on several domains. the state). Using VI to solve MDPs belonging to either of these special classes can be quite inefficient, since VI performs backups over the entire space, whereas the only backups useful for improving V fl are those on the \"frontier\" between already-correct and not-yet-correct V fl values. In fact, there are classical algorithms for both problem classes which compute V fl more efficiently by explicitly working backwards: for the deterministic class, Dijkstra's shortest-path algorithm; and for the acyclic class, Directed-Acyclic-Graph-Shortest-Paths (DAG-SP) [6]. 1 DAG-SP first topologically sorts the MDP, producing a linear ordering of the states in which every state x precedes all states reachable from x. Then, it runs through that list in reverse, performing one backup per state. Worst-case bounds for VI, Dijkstra, and DAG-SP in deterministic domains with X states and A actions/state are 1 Although [6] presents DAG-SP only for deterministic acyclic problems, it applies straightforwardly to the ",
+ "neighbors": [
+ 45,
+ 318,
+ 327,
+ 776
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 137,
+ "label": 6,
+ "text": "Title: Learning Markov chains with variable memory length from noisy output \nAbstract: The problem of modeling complicated data sequences, such as DNA or speech, often arises in practice. Most of the algorithms select a hypothesis from within a model class assuming that the observed sequence is the direct output of the underlying generation process. In this paper we consider the case when the output passes through a memoryless noisy channel before observation. In particular, we show that in the class of Markov chains with variable memory length, learning is affected by factors, which, despite being super-polynomial, are still small in some practical cases. Markov models with variable memory length, or probabilistic finite suffix automata, were introduced in learning theory by Ron, Singer and Tishby who also described a polynomial time learning algorithm [11, 12]. We present a modification of the algorithm which uses a noise-corrupted sample and has knowledge of the noise structure. The same algorithm is still viable if the noise is not known exactly but a good estimation is available. Finally, some experimental results are presented for removing noise from corrupted English text, and to measure how the performance of the learning algorithm is affected by the size of the noisy sample and the noise rate. ",
+ "neighbors": [
+ 3,
+ 333
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 138,
+ "label": 1,
+ "text": "Title: Distribution Category: Users Guide to the PGAPack Parallel Genetic Algorithm Library \nAbstract: The problem of modeling complicated data sequences, such as DNA or speech, often arises in practice. Most of the algorithms select a hypothesis from within a model class assuming that the observed sequence is the direct output of the underlying generation process. In this paper we consider the case when the output passes through a memoryless noisy channel before observation. In particular, we show that in the class of Markov chains with variable memory length, learning is affected by factors, which, despite being super-polynomial, are still small in some practical cases. Markov models with variable memory length, or probabilistic finite suffix automata, were introduced in learning theory by Ron, Singer and Tishby who also described a polynomial time learning algorithm [11, 12]. We present a modification of the algorithm which uses a noise-corrupted sample and has knowledge of the noise structure. The same algorithm is still viable if the noise is not known exactly but a good estimation is available. Finally, some experimental results are presented for removing noise from corrupted English text, and to measure how the performance of the learning algorithm is affected by the size of the noisy sample and the noise rate. ",
+ "neighbors": [
+ 27,
+ 205,
+ 420
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 139,
+ "label": 3,
+ "text": "Title: Bayesian Mixture Modeling by Monte Carlo Simulation \nAbstract: It is shown that Bayesian inference from data modeled by a mixture distribution can feasibly be performed via Monte Carlo simulation. This method exhibits the true Bayesian predictive distribution, implicitly integrating over the entire underlying parameter space. An infinite number of mixture components can be accommodated without difficulty, using a prior distribution for mixing proportions that selects a reasonable subset of components to explain any finite training set. The need to decide on a \"correct\" number of components is thereby avoided. The feasibility of the method is shown empirically for a simple classification task. ",
+ "neighbors": [
+ 323
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 140,
+ "label": 4,
+ "text": "Title: Machine Learning, Efficient Reinforcement Learning through Symbiotic Evolution \nAbstract: This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, make SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications. ",
+ "neighbors": [
+ 285,
+ 325,
+ 563,
+ 634,
+ 709,
+ 1176
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 141,
+ "label": 3,
+ "text": "Title: Probabilistic evaluation of sequential plans from causal models with hidden variables \nAbstract: The paper concerns the probabilistic evaluation of plans in the presence of unmeasured variables, each plan consisting of several concurrent or sequential actions. We establish a graphical criterion for recognizing when the effects of a given plan can be predicted from passive observations on measured variables only. When the criterion is satisfied, a closed-form expression is provided for the probability that the plan will achieve a specified goal.",
+ "neighbors": [
+ 225,
+ 742,
+ 1106
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 142,
+ "label": 5,
+ "text": "Title: Control Flow Prediction For Dynamic ILP Processors \nAbstract: We introduce a technique to enhance the ability of dynamic ILP processors to exploit (speculatively executed) parallelism. Existing branch prediction mechanisms used to establish a dynamic window from which ILP can be extracted are limited in their abilities to: (i) create a large, accurate dynamic window, (ii) initiate a large number of instructions into this window in every cycle, and (iii) traverse multiple branches of the control flow graph per prediction. We introduce control flow prediction which uses information in the control flow graph of a program to overcome these limitations. We discuss how information present in the control flow graph can be represented using multiblocks, and conveyed to the hardware using Control Flow Tables and Control Flow Prediction Buffers. We evaluate the potential of control flow prediction on an abstract machine and on a dynamic ILP processing model. Our results indicate that control flow prediction is a powerful and effective assist to the hardware in making more informed run time decisions about program control flow. ",
+ "neighbors": [
+ 381,
+ 1332
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 143,
+ "label": 3,
+ "text": "Title: Mean Field Theory for Sigmoid Belief Networks \nAbstract: We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition|the classification of handwritten digits.",
+ "neighbors": [
+ 16,
+ 17,
+ 42,
+ 60,
+ 61,
+ 176,
+ 240,
+ 336,
+ 341,
+ 375,
+ 411,
+ 424
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 144,
+ "label": 6,
+ "text": "Title: A Statistical Approach to Solving the EBL Utility Problem \nAbstract: Many \"learning from experience\" systems use information extracted from problem solving experiences to modify a performance element PE, forming a new element PE 0 that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE 0 is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, palo, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively. fl Much of this work was performed at the University of Toronto, where it was supported by the Institute for Robotics and Intelligent Systems and by an operating grant from the National Science and Engineering Research Council of Canada. We also gratefully acknowledge receiving many helpful comments from William Cohen, Dave Mitchell, Dale Schuurmans and the anonymous referees. ",
+ "neighbors": [
+ 50,
+ 502,
+ 541,
+ 1017
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 145,
+ "label": 4,
+ "text": "Title: A Modular Q-Learning Architecture for Manipulator Task Decomposition `Data storage in the cerebellar model ar\nAbstract: Compositional Q-Learning (CQ-L) (Singh 1992) is a modular approach to learning to perform composite tasks made up of several elemental tasks by reinforcement learning. Skills acquired while performing elemental tasks are also applied to solve composite tasks. Individual skills compete for the right to act and only winning skills are included in the decomposition of the composite task. We extend the original CQ-L concept in two ways: (1) a more general reward function, and (2) the agent can have more than one actuator. We use the CQ-L architecture to acquire skills for performing composite tasks with a simulated two-linked manipulator having large state and action spaces. The manipulator is a non-linear dynamical system and we require its end-effector to be at specific positions in the workspace. Fast function approximation in each of the Q-modules is achieved through the use of an array of Cerebellar Model Articulation Controller (CMAC) (Albus Our research interests involve the scaling up of machine learning methods, especially reinforcement learning, for autonomous robot control. We are interested in function approximators suitable for reinforcement learning in problems with large state spaces, such as the Cerebellar Model Articulation Controller (CMAC) (Albus 1975) which permit fast, online learning and good local generalization. In addition, we are interested in task decomposition by reinforcement learning and the use of hierarchical and modular function approximator architectures. We are examining the effectiveness of a modified Hierarchical Mixtures of Experts (HME) (Jordan & Jacobs 1993) approach for reinforcement learning since the original HME was developed mainly for supervised learning and batch learning tasks. The incorporation of domain knowledge into reinforcement learning agents is an important way of extending their capabilities. Default policies can be specified, and domain knowledge can also be used to restrict the size of the state-action space, leading to faster learning. We are investigating the use of Q-Learning (Watkins 1989) in planning tasks, using a classifier system (Holland 1986) to encode the necessary condition-action rules. Jordan, M. & Jacobs, R. (1993), Hierarchical mixtures of experts and the EM algorithm, Technical Report 9301, MIT Computational Cognitive Science. ",
+ "neighbors": [
+ 33,
+ 40,
+ 169,
+ 324,
+ 400
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 146,
+ "label": 2,
+ "text": "Title: Scaling-up RAAMs \nAbstract: Modifications to Recursive Auto-Associative Memory are presented, which allow it to store deeper and more complex data structures than previously reported. These modifications include adding extra layers to the compressor and reconstructor networks, employing integer rather than real-valued representations, pre-conditioning the weights and pre-setting the representations to be compatible with them. The resulting system is tested on a data set of syntactic trees extracted from the Penn Treebank.",
+ "neighbors": [
+ 4,
+ 662
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 147,
+ "label": 6,
+ "text": "Title: An Efficient Boosting Algorithm for Combining Preferences \nAbstract: The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for a restricted case. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. ",
+ "neighbors": [
+ 39,
+ 236,
+ 330,
+ 445
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 148,
+ "label": 0,
+ "text": "Title: Using Decision Trees to Improve Case-Based Learning \nAbstract: This paper shows that decision trees can be used to improve the performance of case-based learning (CBL) systems. We introduce a performance task for machine learning systems called semi-flexible prediction that lies between the classification task performed by decision tree algorithms and the flexible prediction task performed by conceptual clustering systems. In semi-flexible prediction, learning should improve prediction of a specific set of features known a priori rather than a single known feature (as in classification) or an arbitrary set of features (as in conceptual clustering). We describe one such task from natural language processing and present experiments that compare solutions to the problem using decision trees, CBL, and a hybrid approach that combines the two. In the hybrid approach, decision trees are used to specify the features to be included in k-nearest neighbor case retrieval. Results from the experiments show that the hybrid approach outperforms both the decision tree and case-based approaches as well as two case-based systems that incorporate expert knowledge into their case retrieval algorithms. Results clearly indicate that decision trees can be used to improve the performance of CBL systems and do so without reliance on potentially expensive expert knowledge.",
+ "neighbors": [
+ 242,
+ 371,
+ 564,
+ 1224
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 149,
+ "label": 2,
+ "text": "Title: Factor Analysis Using Delta-Rule Wake-Sleep Learning \nAbstract: Technical Report No. 9607, Department of Statistics, University of Toronto We describe a linear network that models correlations between real-valued visible variables using one or more real-valued hidden variables a factor analysis model. This model can be seen as a linear version of the Helmholtz machine, and its parameters can be learned using the wake-sleep method, in which learning of the primary generative model is assisted by a recognition model, whose role is to fill in the values of hidden variables based on the values of visible variables. The generative and recognition models are jointly learned in wake and sleep phases, using just the delta rule. This learning procedure is comparable in simplicity to Oja's version of Hebbian learning, which produces a somewhat different representation of correlations in terms of principal components. We argue that the simplicity of wake-sleep learning makes factor analysis a plau sible alternative to Hebbian learning as a model of activity-dependent cortical plasticity.",
+ "neighbors": [
+ 19,
+ 274,
+ 387
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 150,
+ "label": 2,
+ "text": "Title: Using Dirichlet Mixture Priors to Derive Hidden Markov Models for Protein Families \nAbstract: A Bayesian method for estimating the amino acid distributions in the states of a hidden Markov model (HMM) for a protein family or the columns of a multiple alignment of that family is introduced. This method uses Dirichlet mixture densities as priors over amino acid distributions. These mixture densities are determined from examination of previously constructed HMMs or multiple alignments. It is shown that this Bayesian method can improve the quality of HMMs produced from small training sets. Specific experiments on the EF-hand motif are reported, for which these priors are shown to produce HMMs with higher likelihood on unseen data, and fewer false positives and false negatives in a database search task. ",
+ "neighbors": [
+ 0,
+ 3,
+ 156,
+ 244,
+ 314,
+ 591,
+ 630
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 151,
+ "label": 0,
+ "text": "Title: How to Get a Free Lunch: A Simple Cost Model for Machine Learning Applications \nAbstract: This paper proposes a simple cost model for machine learning applications based on the notion of net present value. The model extends and unifies the models used in (Pazzani et al., 1994) and (Masand & Piatetsky-Shapiro, 1996). It attempts to answer the question \"Should a given machine learning system now in the prototype stage be fielded?\" The model's inputs are the system's confusion matrix, the cash flow matrix for the application, the cost per decision, the one-time cost of deploying the system, and the rate of return on investment. Like Provost and Fawcett's (1997) ROC convex hull method, the present model can be used for decision-making even when its input variables are not known exactly. Despite its simplicity, it has a number of non-trivial consequences. For example, under it the \"no free lunch\" theorems of learning theory no longer apply. ",
+ "neighbors": [
+ 128,
+ 186,
+ 339
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 152,
+ "label": 3,
+ "text": "Title: ASPECTS OF GRAPHICAL MODELS CONNECTED WITH CAUSALITY \nAbstract: This paper demonstrates the use of graphs as a mathematical tool for expressing independenices, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. ",
+ "neighbors": [
+ 740,
+ 850,
+ 1139,
+ 1294
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 153,
+ "label": 2,
+ "text": "Title: Soft Vector Quantization and the EM Algorithm Running Title: Soft Vector Quantization and EM Section:\nAbstract: This paper demonstrates the use of graphs as a mathematical tool for expressing independenices, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. ",
+ "neighbors": [
+ 366
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 154,
+ "label": 2,
+ "text": "Title: Non-linear Models for Time Series Using Mixtures of Experts \nAbstract: We consider a novel non-linear model for time series analysis. The study of this model emphasizes both theoretical aspects as well as practical applicability. The architecture of the model is demonstrated to be sufficiently rich, in the sense of approximating unknown functional forms, yet it retains some of the simple and intuitive characteristics of linear models. A comparison to some more established non-linear models will be emphasized, and theoretical issues are backed by prediction results for benchmark time series, as well as computer generated data sets. Efficient estimation algorithms are seen to be applicable, made possible by the mixture based structure of the model. Large sample properties of the estimators are discussed as well, in both well specified as well as misspecified settings. We also demonstrate how inference pertaining to the data structure may be made from the parameterization of the model, resulting in a better, more intuitive, understanding of the structure and performance of the model.",
+ "neighbors": [
+ 40,
+ 388,
+ 1244
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 155,
+ "label": 6,
+ "text": "Title: On Learning from Noisy and Incomplete Examples \nAbstract: We investigate learnability in the PAC model when the data used for learning, attributes and labels, is either corrupted or incomplete. In order to prove our main results, we define a new complexity measure on statistical query (SQ) learning algorithms. The view of an SQ algorithm is the maximum over all queries in the algorithm, of the number of input bits on which the query depends. We show that a restricted view SQ algorithm for a class is a general sufficient condition for learnability in both the models of attribute noise and covered (or missing) attributes. We further show that since the algorithms in question are statistical, they can also simultaneously tolerate classification noise. Classes for which these results hold, and can therefore be learned with simultaneous attribute noise and classification noise, include k-DNF, k-term-DNF by DNF representations, conjunctions with few relevant variables, and over the uniform distribution, decision lists. These noise models are the first PAC models in which all training data, attributes and labels, may be corrupted by a random process. Previous researchers had shown that the class of k-DNF is learnable with attribute noise if the attribute noise rate is known exactly. We show that all of our attribute noise learnabil-ity results, either with or without classification noise, also hold when the exact noise rate is not Appeared in Proceedings of the Eighth Annual ACM Conference on Computational Learning Theory. ACM Press, July 1995. known, provided that the learner instead has a polynomially good approximation of the noise rate. In addition, we show that the results also hold when there is not one single noise rate, but a distinct noise rate for each attribute. Our results for learning with random covering do not require the learner to be told even an approximation of the covering rate and in addition hold in the setting with distinct covering rates for each attribute. Finally, we give lower bounds on the number of examples required for learning in the presence of attribute noise or covering.",
+ "neighbors": [
+ 8,
+ 259
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 156,
+ "label": 2,
+ "text": "Title: Finding Genes in DNA with a Hidden Markov Model \nAbstract: This study describes a new Hidden Markov Model (HMM) system for segmenting uncharacterized genomic DNA sequences into exons, introns, and intergenic regions. Separate HMM modules were designed and trained for specific regions of DNA: exons, introns, intergenic regions, and splice sites. The models were then tied together to form a biologically feasible topology. The integrated HMM was trained further on a set of eukaryotic DNA sequences, and tested by using it to segment a separate set of sequences. The resulting HMM system, which is called VEIL (Viterbi Exon-Intron Locator), obtains an overall accuracy on test data of 92% of total bases correctly labelled, with a correlation coefficient of 0.73. Using the more stringent test of exact exon prediction, VEIL correctly located both ends of 53% of the coding exons, and 49% of the exons it predicts are exactly correct. These results compare favorably to the best previous results for gene structure prediction, and demonstrate the benefits of using HMMs for this problem.",
+ "neighbors": [
+ 3,
+ 131,
+ 150,
+ 358,
+ 360,
+ 1090,
+ 1299
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 157,
+ "label": 3,
+ "text": "Title: Generalized Update: Belief Change in Dynamic Settings \nAbstract: Belief revision and belief update have been proposed as two types of belief change serving different purposes. Belief revision is intended to capture changes of an agent's belief state reflecting new information about a static world. Belief update is intended to capture changes of belief in response to a changing world. We argue that both belief revision and belief update are too restrictive; routine belief change involves elements of both. We present a model for generalized update that allows updates in response to external changes to inform the agent about its prior beliefs. This model of update combines aspects of revision and update, providing a more realistic characterization of belief change. We show that, under certain assumptions, the original update postulates are satisfied. We also demonstrate that plain revision and plain update are special cases of our model, in a way that formally verifies the intuition that revision is suitable for static belief change.",
+ "neighbors": [
+ 160,
+ 195,
+ 196,
+ 265
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 158,
+ "label": 2,
+ "text": "Title: A Performance Analysis of CNS-1 on Sparse Connectionist Networks \nAbstract: This report deals with the efficient mapping of sparse neural networks on CNS-1. We develop parallel vector code for an idealized sparse network and determine its performance under three memory systems. We use the code to evaluate the memory systems (one of which will be implemented in the prototype), and to pinpoint bottlenecks in the current CNS-1 design. ",
+ "neighbors": [
+ 296,
+ 532
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 159,
+ "label": 4,
+ "text": "Title: Some Experiments with a Hybrid Model for Learning Sequential Decision Making \nAbstract: In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments. We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer better environmental change; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings.",
+ "neighbors": [
+ 114,
+ 273,
+ 328
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 160,
+ "label": 3,
+ "text": "Title: A Qualitative Markov Assumption and Its Implications for Belief Change \nAbstract: The study of belief change has been an active area in philosophy and AI. In recent years, two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly speaking, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative settings. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model.",
+ "neighbors": [
+ 157,
+ 196,
+ 265,
+ 1115,
+ 1292
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 161,
+ "label": 4,
+ "text": "Title: Applying Online Search Techniques to Continuous-State Reinforcement Learning key to the success of the local\nAbstract: In this paper, we describe methods for efficiently computing better solutions to control problems in continuous state spaces. We provide algorithms that exploit online search to boost the power of very approximate value functions discovered by traditional reinforcement learning techniques. We examine local searches, where the agent performs a finite-depth lookahead search, and global searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of the global methods lies in using aggressive state-space search techniques such as uniform-cost search and A fl , tamed into a tractable form by exploiting neighborhood relations and trajectory constraints that arise from continuous-space dynamic control. ",
+ "neighbors": [
+ 169,
+ 276,
+ 300,
+ 318,
+ 329
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 162,
+ "label": 3,
+ "text": "Title: USING SMOOTHING SPLINE ANOVA TO EXAMINE THE RELATION OF RISK FACTORS TO THE INCIDENCE AND\nAbstract: This paper presents recent developments toward a formalism that combines useful properties of both logic and probabilities. Like logic, the formalism admits qualitative sentences and provides symbolic machinery for deriving deductively closed beliefs and, like probability, it permits us to express if-then rules with different levels of firmness and to retract beliefs in response to changing observations. Rules are interpreted as order-of-magnitude approximations of conditional probabilities which impose constraints over the rankings of worlds. Inferences are supported by a unique priority ordering on rules which is syntactically derived from the knowledge base. This ordering accounts for rule interactions, respects specificity considerations and facilitates the construction of coherent states of beliefs. Practical algorithms are developed and analyzed for testing consistency, computing rule ordering, and answering queries. Imprecise observations are incorporated using qualitative versions of Jef-frey's Rule and Bayesian updating, with the result that coherent belief revision is embodied naturally and tractably. Finally, causal rules are interpreted as imposing Markovian conditions that further constrain world rankings to reflect the modularity of causal organizations. These constraints are shown to facilitate reasoning about causal projections, explanations, actions and change. ",
+ "neighbors": [
+ 109,
+ 298
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 163,
+ "label": 4,
+ "text": "Title: Clay: Integrating Motor Schemas and Reinforcement Learning \nAbstract: Clay is an evolutionary architecture for autonomous robots that integrates motor schema-based control and reinforcement learning. Robots utilizing Clay benefit from the real-time performance of motor schemas in continuous and dynamic environments while taking advantage of adaptive reinforcement learning. Clay coordinates assemblages (groups of motor schemas) using embedded reinforcement learning modules. The coordination modules activate specific assemblages based on the presently perceived situation. Learning occurs as the robot selects assemblages and samples a reinforcement signal over time. Experiments in a robot soccer simulation illustrate the performance and utility of the system.",
+ "neighbors": [
+ 260,
+ 500
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 164,
+ "label": 2,
+ "text": "Title: Cortical Synchronization and Perceptual Framing \nAbstract: Clay is an evolutionary architecture for autonomous robots that integrates motor schema-based control and reinforcement learning. Robots utilizing Clay benefit from the real-time performance of motor schemas in continuous and dynamic environments while taking advantage of adaptive reinforcement learning. Clay coordinates assemblages (groups of motor schemas) using embedded reinforcement learning modules. The coordination modules activate specific assemblages based on the presently perceived situation. Learning occurs as the robot selects assemblages and samples a reinforcement signal over time. Experiments in a robot soccer simulation illustrate the performance and utility of the system.",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 165,
+ "label": 6,
+ "text": "Title: Learning Switching Concepts \nAbstract: We consider learning in situations where the function used to classify examples may switch back and forth between a small number of different concepts during the course of learning. We examine several models for such situations: oblivious models in which switches are made independent of the selection of examples, and more adversarial models in which a single adversary controls both the concept switches and example selection. We show relationships between the more benign models and the p-concepts of Kearns and Schapire, and present polynomial-time algorithms for learning switches between two k-DNF formulas. For the most adversarial model, we present a model of success patterned after the popular competitive analysis used in studying on-line algorithms. We describe a randomized query algorithm for such adversarial switches between two monotone disjunctions that is \"1-competitive\" in that the total number of mistakes plus queries is with high probability bounded by the number of switches plus some fixed polynomial in n (the number of variables). We also use notions described here to provide sufficient conditions under which learning a p-concept class \"with a decision rule\" implies being able to learn the class \"with a model of probability.\" ",
+ "neighbors": [
+ 316,
+ 346
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 166,
+ "label": 0,
+ "text": "Title: A Formal Analysis of Case Base Retrieval \nAbstract: Case based systems typically retrieve cases from the case base by applying similarity measures. The measures are usually constructed in an ad hoc manner. This report presents a toolbox for the systematic construction of similarity measures. In addition to paving the way to a design methodology for similarity measures, this systematic approach facilitates the identification of opportunities for parallelisation in case base retrieval.",
+ "neighbors": [
+ 37,
+ 41,
+ 775,
+ 1132,
+ 1230
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 167,
+ "label": 0,
+ "text": "Title: A theory of questions and question asking \nAbstract: ",
+ "neighbors": [
+ 35,
+ 718,
+ 834,
+ 854,
+ 855,
+ 857,
+ 1297
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 168,
+ "label": 1,
+ "text": "Title: 20 Data Structures and Genetic Programming two techniques for reducing run time. \nAbstract: In real world applications, software engineers recognise the use of memory must be organised via data structures and that software using the data must be independant of the data structures' implementation details. They achieve this by using abstract data structures, such as records, files and buffers. We demonstrate that genetic programming can automatically implement simple abstract data structures, considering in detail the task of evolving a list. We show general and reasonably efficient implementations can be automatically generated from simple primitives. A model for maintaining evolved code is demonstrated using the list problem. Much published work on genetic programming (GP) evolves functions without side-effects to learn patterns in test data. In contrast human written programs often make extensive and explicit use of memory. Indeed memory in some form is required for a programming system to be Turing Complete, i.e. for it to be possible to write any (computable) program in that system. However inclusion of memory can make the interactions between parts of programs much more complex and so make it harder to produce programs. Despite this it has been shown GP can automatically create programs which explicitly use memory [Teller 1994]. In both normal and genetic programming considerable benefits have been found in adopting a structured approach. For example [Koza 1994] shows the introduction of evolvable code modules (automatically defined functions, ADFs) can greatly help GP to reach a solution. We suggest that a corresponding structured approach to use of data will similarly have significant advantage to GP. Earlier work has demonstrated that genetic programming can automatically generate simple abstract data structures, namely stacks and queues [Langdon 1995a]. That is, GP can evolve programs that organise memory (accessed via simple read and write primitives) into data structures which can be used by external software without it needing to know how they are implemented. This chapter shows it is possible to evolve a list data structure from basic primitives. [Aho, Hopcroft and Ullman 1987] suggest three different ways to implement a list but these experiments show GP can evolve its own implementation. This requires all the list components to agree on one implementation as they co-evolve together. Section 20.3 describes the GP architecture, including use of Pareto multiple component fitness scoring (20.3.4) and measures aimed at speeding the GP search (20.3.5). The evolved solutions are described in Section 20.4. Section 20.5 presents a candidate model for maintaining evolved software. This is followed by a discussion of what we have learned (20.6) and conclusions that can be drawn (20.7). ",
+ "neighbors": [
+ 91,
+ 1034,
+ 1161
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 169,
+ "label": 4,
+ "text": "Title: References elements that can solve difficult learning control problems. on Simulation of Adaptive Behavior, pages\nAbstract: Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universitat Munchen, Institut fu Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. ",
+ "neighbors": [
+ 9,
+ 46,
+ 48,
+ 59,
+ 83,
+ 95,
+ 104,
+ 108,
+ 145,
+ 161,
+ 263,
+ 264,
+ 272,
+ 276,
+ 287,
+ 320,
+ 326,
+ 327,
+ 328,
+ 344,
+ 370,
+ 372,
+ 374,
+ 760,
+ 931,
+ 1250,
+ 1266
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 170,
+ "label": 4,
+ "text": "Title: A Neuro-Dynamic Programming Approach to Retailer Inventory Management 1 \nAbstract: Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universitat Munchen, Institut fu Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. ",
+ "neighbors": [
+ 45,
+ 120,
+ 269,
+ 327,
+ 976
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 171,
+ "label": 2,
+ "text": "Title: Automatic Feature Extraction in Machine Learning \nAbstract: This thesis presents a machine learning model capable of extracting discrete classes out of continuous valued input features. This is done using a neurally inspired novel competitive classifier (CC) which feeds the discrete classifications forward to a supervised machine learning model. The supervised learning model uses the discrete classifications and perhaps other information available to solve a problem. The supervised learner then generates feedback to guide the CC into potentially more useful classifications of the continuous valued input features. Two supervised learning models are combined with the CC creating ASOCS-AFE and ID3-AFE. Both models are simulated and the results are analyzed. Based on these results, several areas of future research are proposed. ",
+ "neighbors": [
+ 247,
+ 470,
+ 738
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 172,
+ "label": 6,
+ "text": "Title: On the Approximability of Numerical Taxonomy (Fitting Distances by Tree Metrics) \nAbstract: We consider the problem of fitting an n fi n distance matrix D by a tree metric T . Let \" be the distance to the closest tree metric, that is, \" = min T fk T; D k 1 g. First we present an O(n 2 ) algorithm for finding an additive tree T such that k T; D k 1 3\", giving the first algorithm for this problem with a performance guarantee. Second we show that it is N P-hard to find a tree T such that k T; D k 1 < 9 ",
+ "neighbors": [
+ 431,
+ 998,
+ 1104
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 173,
+ "label": 0,
+ "text": "Title: Storing and Indexing Plan Derivations through Explanation-based Analysis of Retrieval Failures \nAbstract: Case-Based Planning (CBP) provides a way of scaling up domain-independent planning to solve large problems in complex domains. It replaces the detailed and lengthy search for a solution with the retrieval and adaptation of previous planning experiences. In general, CBP has been demonstrated to improve performance over generative (from-scratch) planning. However, the performance improvements it provides are dependent on adequate judgements as to problem similarity. In particular, although CBP may substantially reduce planning effort overall, it is subject to a mis-retrieval problem. The success of CBP depends on these retrieval errors being relatively rare. This paper describes the design and implementation of a replay framework for the case-based planner dersnlp+ebl. der-snlp+ebl extends current CBP methodology by incorporating explanation-based learning techniques that allow it to explain and learn from the retrieval failures it encounters. These techniques are used to refine judgements about case similarity in response to feedback when a wrong decision has been made. The same failure analysis is used in building the case library, through the addition of repairing cases. Large problems are split and stored as single goal subproblems. Multi-goal problems are stored only when these smaller cases fail to be merged into a full solution. An empirical evaluation of this approach demonstrates the advantage of learning from experienced retrieval failure.",
+ "neighbors": [
+ 347,
+ 906
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 174,
+ "label": 2,
+ "text": "Title: Data Exploration with Reflective Adaptive Models \nAbstract: Case-Based Planning (CBP) provides a way of scaling up domain-independent planning to solve large problems in complex domains. It replaces the detailed and lengthy search for a solution with the retrieval and adaptation of previous planning experiences. In general, CBP has been demonstrated to improve performance over generative (from-scratch) planning. However, the performance improvements it provides are dependent on adequate judgements as to problem similarity. In particular, although CBP may substantially reduce planning effort overall, it is subject to a mis-retrieval problem. The success of CBP depends on these retrieval errors being relatively rare. This paper describes the design and implementation of a replay framework for the case-based planner dersnlp+ebl. der-snlp+ebl extends current CBP methodology by incorporating explanation-based learning techniques that allow it to explain and learn from the retrieval failures it encounters. These techniques are used to refine judgements about case similarity in response to feedback when a wrong decision has been made. The same failure analysis is used in building the case library, through the addition of repairing cases. Large problems are split and stored as single goal subproblems. Multi-goal problems are stored only when these smaller cases fail to be merged into a full solution. An empirical evaluation of this approach demonstrates the advantage of learning from experienced retrieval failure.",
+ "neighbors": [
+ 24,
+ 135
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 175,
+ "label": 5,
+ "text": "Title: Confidence Estimation for Speculation Control \nAbstract: Modern processors improve instruction level parallelism by speculation. The outcome of data and control decisions is predicted, and the operations are speculatively executed and only committed if the original predictions were correct. There are a number of other ways that processor resources could be used, such as threading or eager execution. As the use of speculation increases, we believe more processors will need some form of speculation control to balance the benefits of speculation against other possible activities. Confidence estimation is one technique that can be exploited by architects for speculation control. In this paper, we introduce performance metrics to compare confidence estimation mechanisms, and argue that these metrics are appropriate for speculation control. We compare a number of confidence estimation mechanisms, focusing on mechanisms that have a small implementation cost and gain benefit by exploiting characteristics of branch predictors, such as clustering of mispredicted branches. We compare the performance of the different confidence estimation methods using detailed pipeline simulations. Using these simulations, we show how to improve some confidence estimators, providing better insight for future investigations comparing and applying confidence estimators. ",
+ "neighbors": [
+ 243,
+ 349
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 176,
+ "label": 2,
+ "text": "Title: Boltzmann Machine learning using mean field theory and linear response correction \nAbstract: We present a new approximate learning algorithm for Boltzmann Machines, using a systematic expansion of the Gibbs free energy to second order in the weights. The linear response correction to the correlations is given by the Hessian of the Gibbs free energy. The computational complexity of the algorithm is cubic in the number of neurons. We compare the performance of the exact BM learning algorithm with first order (Weiss) mean field theory and second order (TAP) mean field theory. The learning task consists of a fully connected Ising spin glass model on 10 neurons. We conclude that 1) the method works well for paramagnetic problems 2) the TAP correction gives a significant improvement over the Weiss mean field theory, both for paramagnetic and spin glass problems and 3) that the inclusion of diagonal weights improves the Weiss approximation for paramagnetic problems, but not for spin glass problems.",
+ "neighbors": [
+ 61,
+ 143,
+ 240,
+ 813,
+ 1035
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 177,
+ "label": 4,
+ "text": "Title: Solving Combinatorial Optimization Tasks by Reinforcement Learning: A General Methodology Applied to Resource-Constrained Scheduling \nAbstract: This paper introduces a methodology for solving combinatorial optimization problems through the application of reinforcement learning methods. The approach can be applied in cases where several similar instances of a combinatorial optimization problem must be solved. The key idea is to analyze a set of \"training\" problem instances and learn a search control policy for solving new problem instances. The search control policy has the twin goals of finding high-quality solutions and finding them quickly. Results of applying this methodology to a NASA scheduling problem show that the learned search control policy is much more effective than the best known non-learning search procedure|a method based on simulated annealing.",
+ "neighbors": [
+ 45,
+ 232,
+ 318,
+ 327
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 178,
+ "label": 4,
+ "text": "Title: Learning Curve Bounds for Markov Decision Processes with Undiscounted Rewards \nAbstract: Markov decision processes (MDPs) with undis-counted rewards represent an important class of problems in decision and control. The goal of learning in these MDPs is to find a policy that yields the maximum expected return per unit time. In large state spaces, computing these averages directly is not feasible; instead, the agent must estimate them by stochastic exploration of the state space. In this case, longer exploration times enable more accurate estimates and more informed decision-making. The learning curve for an MDP measures how the agent's performance depends on the allowed exploration time, T . In this paper we analyze these learning curves for a simple control problem with undiscounted rewards. In particular, methods from statistical mechanics are used to calculate lower bounds on the agent's performance in the thermodynamic limit T ! 1, N ! 1, ff = T =N (finite), where T is the number of time steps allotted per policy evaluation and N is the size of the state space. In this limit, we provide a lower bound on the return of policies that appear optimal based on imperfect statistics.",
+ "neighbors": [
+ 30,
+ 318,
+ 320,
+ 327,
+ 556,
+ 774
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 179,
+ "label": 6,
+ "text": "Title: The Power of Self-Directed Learning \nAbstract: This paper studies self-directed learning, a variant of the on-line learning model in which the learner selects the presentation order for the instances. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, k-term DNF formulas, and orthogonal rectangles in f0; 1; ; n1g d . These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then prove that the model of self-directed learning is more powerful than all other commonly used on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis dimension. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes. fl Supported in part by a GE Foundation Junior Faculty Grant and NSF Grant CCR-9110108. Part of this research was conducted while the author was at the M.I.T. Laboratory for Computer Science and supported by NSF grant DCR-8607494 and a grant from the Siemens Corporation. Net address: sg@cs.wustl.edu. ",
+ "neighbors": [
+ 2,
+ 808,
+ 1082
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 180,
+ "label": 2,
+ "text": "Title: Forecasting electricity demand using nonlinear mixture of experts \nAbstract: In this paper we study a forecasting model based on mixture of experts for predicting the French electric daily consumption energy. We split the task into two parts. Using mixture of experts, a first model predicts the electricity demand from the exogenous variables (such as temperature and degree of cloud cover) and can be viewed as a nonlinear regression model of mixture of Gaussians. Using a single neural network, a second model predicts the evolution of the residual error of the first one, and can be viewed as an nonlinear autoregression model. We analyze the splitting of the input space generated by the mixture of experts model, and compare the performance to models presently used. ",
+ "neighbors": [
+ 40,
+ 388,
+ 432,
+ 1284
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 181,
+ "label": 3,
+ "text": "Title: Chain graphs for learning \nAbstract: ",
+ "neighbors": [
+ 240,
+ 336,
+ 377,
+ 448
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 182,
+ "label": 0,
+ "text": "Title: The Case for Graph-Structured Representations \nAbstract: Case-based reasoning involves reasoning from cases: specific pieces of experience, the reasoner's or another's, that can be used to solve problems. We use the term \"graph-structured\" for representations that (1) are capable of expressing the relations between any two objects in a case, (2) allow the set of relations used to vary from case to case, and (3) allow the set of possible relations to be expanded as necessary to describe new cases. Such representations can be implemented as, for example, semantic networks or lists of concrete propositions in some logic. We believe that graph-structured representations offer significant advantages, and thus we are investigating ways to implement such representations efficiently. We make a \"case-based argument\" using examples from two systems, chiron and caper, to show how a graph-structured representation supports two different kinds of case-based planning in two different domains. We discuss the costs associated with graph-structured representations and describe an approach to reducing those costs, imple mented in caper.",
+ "neighbors": [
+ 464,
+ 761,
+ 775,
+ 915
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 183,
+ "label": 5,
+ "text": "Title: Employing Linear Regression in Regression Tree Leaves \nAbstract: The advantage of using linear regression in the leaves of a regression tree is analysed in the paper. It is carried out how this modification affects the construction, pruning and interpretation of a regression tree. The modification is tested on artificial and real-life domains. The results show that the modification is beneficial as it leads to smaller classification errors of induced regression trees. Keywords: machine learning, TDIDT, regression, linear regression, Bayesian approach. ",
+ "neighbors": [
+ 290,
+ 612,
+ 701,
+ 936,
+ 953
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 184,
+ "label": 5,
+ "text": "Title: Cortical Functionality Emergence: Self-Organization of Complex Structures: From Individual to Collective Dynamics, \nAbstract: A Methodology for Evaluating Theory Revision Systems: Results Abstract Theory revision systems are learning systems that have a goal of making small changes to an original theory to account for new data. A measure for the distance between two theories is proposed. This measure corresponds to the minimum number of edit operations at the literal level required to transform one theory into another. By computing the distance between an original theory and a revised theory, the claim that a theory revision system makes few revisions to a theory may be quantitatively evaluated. We present data using both accuracy and the distance metric on Audrey II, with Audrey II fl",
+ "neighbors": [
+ 198
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 185,
+ "label": 0,
+ "text": "Title: Generalizing from Case Studies: A Case Study \nAbstract: Most empirical evaluations of machine learning algorithms are case studies evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of databases investigated, but instead holds for some general class of learning problems. However, these hypotheses are rarely supported with additional evidence, which leaves them suspect. This paper describes an empirical method for generalizing results from case studies and an example application. This method yields rules describing when some algorithms significantly outperform others on some dependent measures. Advantages for generalizing from case studies and limitations of this particular approach are also described.",
+ "neighbors": [
+ 239,
+ 250,
+ 566,
+ 917
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 186,
+ "label": 2,
+ "text": "Title: Appears in Working Notes, Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms\nAbstract: This paper presents the Plannett system, which combines artificial neural networks to achieve expert- level accuracy on the difficult scientific task of recognizing volcanos in radar images of the surface of the planet Venus. Plannett uses ANNs that vary along two dimensions: the set of input features used to train and the number of hidden units. The ANNs are combined simply by averaging their output activations. When Plannett is used as the classification module of a three-stage image analysis system called JAR- tool, the end-to-end accuracy (sensitivity and specificity) is as good as that of a human planetary geologist on a four-image test suite. JARtool-Plannett also achieves the best algorithmic accuracy on these images to date. ",
+ "neighbors": [
+ 151
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 187,
+ "label": 4,
+ "text": "Title: Planning with Closed-Loop Macro Actions \nAbstract: Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Conventional model-based reinforcement learning uses primitive actions that last one time step and that can be modeled independently of the learning agent. These can be generalized to macro actions, multi-step actions specified by an arbitrary policy and a way of completing. Macro actions generalize the classical notion of a macro operator in that they are closed loop, uncertain, and of variable duration. Macro actions are needed to represent common-sense higher-level actions such as going to lunch, grasping an object, or traveling to a distant city. This paper generalizes prior work on temporally abstract models (Sutton 1995) and extends it from the prediction setting to include actions, control, and planning. We define a semantics of models of macro actions that guarantees the validity of planning using such models. This paper present new results in the theory of planning with macro actions and illustrates its potential advantages in a gridworld task. ",
+ "neighbors": [
+ 328,
+ 1053,
+ 1147
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 188,
+ "label": 6,
+ "text": "Title: Statistical Tests for Comparing Supervised Classification Learning Algorithms \nAbstract: This paper reviews five statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type 1 error). Two widely-used statistical tests are shown to have high probability of Type I error in certain situations and should never be used. These tests are (a) a test for the difference of two proportions and (b) a paired-differences t test based on taking several random train/test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of Type I error. A fourth test, McNemar's test, is shown to have low Type I error. The fifth test is a new test, 5x2cv, based on 5 iterations of 2-fold cross-validation. Experiments show that this test also has good Type I error. The paper also measures the power (ability to detect algorithm differences when they do exist) of these tests. The 5x2cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable Type I error. For algorithms that can be executed ten times, the 5x2cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set. ",
+ "neighbors": [
+ 4,
+ 89,
+ 556,
+ 587,
+ 917,
+ 1280
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 189,
+ "label": 3,
+ "text": "Title: BUCKET ELIMINATION: A UNIFYING FRAMEWORK FOR PROBABILISTIC INFERENCE \nAbstract: Probabilistic inference algorithms for belief updating, finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility are reformulated within the bucket elimination framework. This emphasizes the principles common to many of the algorithms appearing in the probabilistic inference literature and clarifies the relationship of such algorithms to nonserial dynamic programming algorithms. A general method for combining conditioning and bucket elimination is also presented. For all the algorithms, bounds on complexity are given as a function of the problem's structure. ",
+ "neighbors": [
+ 34,
+ 190,
+ 192,
+ 223
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 190,
+ "label": 3,
+ "text": "Title: Global Conditioning for Probabilistic Inference in Belief Networks \nAbstract: In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.",
+ "neighbors": [
+ 189,
+ 546,
+ 852
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 191,
+ "label": 2,
+ "text": "Title: From Data Distributions to Regularization in Invariant Learning \nAbstract: Ideally pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the result of adding transformed (or distorted) examples to the training data. The cost function for the enhanced training set is equivalent to the sum of the original cost function plus a regularizer. For unbiased models, the regularizer reduces to the intuitively obvious choice - a term that penalizes changes in the output when the inputs are transformed under the group. For infinitesimal transformations, the coefficient of the regularization term reduces to the variance of the distortions introduced into the training data. This correspondence provides a simple bridge between the two approaches. ",
+ "neighbors": [
+ 57,
+ 450
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 192,
+ "label": 3,
+ "text": "Title: Exploiting Causal Independence in Bayesian Network Inference \nAbstract: A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as or, sum or max, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.",
+ "neighbors": [
+ 34,
+ 189,
+ 223,
+ 606
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 193,
+ "label": 4,
+ "text": "Title: A Comparison of Action Selection Learning Methods \nAbstract: Our goal is to develop a hybrid cognitive model of how humans acquire skills on complex cognitive tasks. We are pursuing this goal by designing hybrid computational architectures for the NRL Navigation task, which requires competent senso-rimotor coordination. In this paper, we empirically compare two methods for control knowledge acquisition (reinforcement learning and a novel variant of action models), as well as a hybrid of these methods, with human learning on this task. Our results indicate that the performance of our action models approach more closely approximates the rate of human learning on the task than does reinforcement learning or the hybrid. We also experimentally explore the impact of background knowledge on system performance. By adding knowledge used by the action models system to the benchmark reinforcement learner, we elevate its performance above that of the action models system. ",
+ "neighbors": [
+ 262,
+ 272,
+ 327,
+ 328
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 194,
+ "label": 5,
+ "text": "Title: Incremental Reduced Error Pruning \nAbstract: This paper outlines some problems that may occur with Reduced Error Pruning in relational learning algorithms, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of the algorithm cannot be recommended for domains which require a very specific concept description.",
+ "neighbors": [
+ 198,
+ 217,
+ 239,
+ 342
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 195,
+ "label": 3,
+ "text": "Title: Abduction as Belief Revision \nAbstract: We propose a model of abduction based on the revision of the epistemic state of an agent. Explanations must be sufficient to induce belief in the sentence to be explained (for instance, some observation), or ensure its consistency with other beliefs, in a manner that adequately accounts for factual and hypothetical sentences. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. To illustrate the generality of our approach, we reconstruct two of the key paradigms for model-based diagnosis, abductive and consistency-based diagnosis, within our framework. This reconstruction provides an alternative semantics for both and extends these systems to accommodate our predictive explanations and semantic preferences on explanations. It also illustrates how more general information can be incorporated in a principled manner. fl Some parts of this paper appeared in preliminary form as Abduction as Belief Revision: A Model of Preferred Explanations, Proc. of Eleventh National Conf. on Artificial Intelligence (AAAI-93), Washington, DC, pp.642-648 (1993). ",
+ "neighbors": [
+ 157,
+ 196,
+ 864,
+ 895
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 196,
+ "label": 3,
+ "text": "Title: Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. \nAbstract: We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted form the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions.",
+ "neighbors": [
+ 157,
+ 160,
+ 195,
+ 265,
+ 451,
+ 987,
+ 1048,
+ 1077,
+ 1292
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 197,
+ "label": 1,
+ "text": "Title: A Promising genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems \nAbstract: We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted form the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions.",
+ "neighbors": [
+ 622,
+ 715,
+ 731,
+ 847,
+ 876
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 198,
+ "label": 5,
+ "text": "Title: Quinlan, 1990 J.R. Quinlan. Learning logical definitions from relations. Machine Learning, First-order theory revision. In\nAbstract: We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted form the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions.",
+ "neighbors": [
+ 184,
+ 194,
+ 200,
+ 299,
+ 393,
+ 573,
+ 701,
+ 716,
+ 735,
+ 801,
+ 802,
+ 907,
+ 908,
+ 930,
+ 1019,
+ 1142,
+ 1168,
+ 1189,
+ 1190,
+ 1247,
+ 1320
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 199,
+ "label": 3,
+ "text": "Title: A Reference Bayesian Test for Nested Hypotheses And its Relationship to the Schwarz Criterion \nAbstract: We build up the mathematical connection between the \"Expectation-Maximization\" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P , and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00000-00-A-0000. The authors were also supported by the HK RGC Earmarked Grant CUHK250/94E, by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-1-0777 from the Office of Naval Research. Michael I. Jordan is an NSF Presidential Young Investigator. ",
+ "neighbors": [
+ 47,
+ 254,
+ 414,
+ 570
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 200,
+ "label": 5,
+ "text": "Title: First Order Regression: Applications in Real-World Domains \nAbstract: A first order regression algorithm capable of handling real-valued (continuous) variables is introduced and some of its applications are presented. Regressional learning assumes real-valued class and discrete or real-valued variables. The algorithm combines regressional learning with standard ILP concepts, such as first order concept description and background knowledge. A clause is generated by successively refining the initial clause by adding literals of the form A = v for the discrete attributes, A v and A v for the real-valued attributes, and background knowledge literals to the clause body. The algorithm employs a covering approach (beam search), a heuristic impurity function, and stopping criteria based on local improvement, minimum number of examples, maximum clause length, minimum local improvement, minimum description length, allowed error, and variable depth. An outline of the algorithm and the results of the system's application in some artificial and real-world domains are presented. The real-world domains comprise: modelling of the water behavior in a surge tank, modelling of the workpiece roughness in a steel grinding process and modelling of the operator's behavior during the process of electrical discharge machining. Special emphasis is given to the evaluation of obtained models by domain experts and their comments on the aspects of practical use of the induced knowledge. The results obtained during the knowledge acquisition process show several important guidelines for knowledge acquisition, concerning mainly the process of interaction with domain experts, exposing primarily the importance of comprehensibility of the induced knowledge.",
+ "neighbors": [
+ 198,
+ 374,
+ 701
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 201,
+ "label": 2,
+ "text": "Title: Induction of Multiscale Temporal Structure \nAbstract: Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of back propagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time|e.g., relations among notes within a musical phrase|but not structure that occurs over longer time periods|e.g., relations among phrases. To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants. Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply can not be learned by standard Many patterns in the world are intrinsically temporal, e.g., speech, music, the unfolding of events. Recurrent neural net architectures have been devised to accommodate time-varying sequences. For example, the architecture shown in Figure 1 can map a sequence of inputs to a sequence of outputs. Learning structure in temporally-extended sequences is a difficult computational problem because the input pattern may not contain all the task-relevant information at any instant. Thus, back propagation.",
+ "neighbors": [
+ 99,
+ 113,
+ 240,
+ 421,
+ 446
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 202,
+ "label": 3,
+ "text": "Title: Convergence controls for MCMC algorithms, with applications to hidden Markov chains \nAbstract: In complex models like hidden Markov chains, the convergence of the MCMC algorithms used to approximate the posterior distribution and the Bayes estimates of the parameters of interest must be controlled in a robust manner. We propose in this paper a series of on-line controls, which rely on classical non-parametric tests, to evaluate independence from the start-up distribution, stability of the Markov chain, and asymptotic normality. These tests lead to graphical control spreadsheets which are presented in the set-up of normal mixture hidden Markov chains to compare the full Gibbs sampler with an aggregated Gibbs sampler based on the forward-backward formulae. ",
+ "neighbors": [
+ 21,
+ 772
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 203,
+ "label": 2,
+ "text": "Title: Generalization and Exclusive Allocation of Credit in Unsupervised Category Learning \nAbstract: Acknowledgements: This research was supported in part by the Office of Naval Research (Cognitive and Neural Sciences, N00014-93-1-0208) and by the Whitaker Foundation (Special Opportunity Grant). We thank George Kalarickal, Charles Schmitt, William Ross, and Douglas Kelly for valuable discussions. ",
+ "neighbors": [
+ 335,
+ 430,
+ 620,
+ 871,
+ 1099
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 204,
+ "label": 2,
+ "text": "Title: A Flexible Model For Human Circadian Rhythms \nAbstract: Many hormones and other physiological processes vary in a circadian pattern. Although a sine/cosine function can be used to model these patterns, this functional form is not appropriate when there is asymmetry between the peak and nadir phases. In this paper we describe a semi-parametric periodic spline function that can be fit to circadian rhythms. The model includes both phase and amplitude so that the time and the magnitude of the peak or nadir can be estimated. We also describe tests of fit for components in the model. Data from an experiment to study immunological responses in humans are used to demonstrate the methods. ",
+ "neighbors": [
+ 291
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 205,
+ "label": 1,
+ "text": "Title: Genetic Algorithms as Multi-Coordinators in Large-Scale Optimization \nAbstract: We present high-level, decomposition-based algorithms for large-scale block-angular optimization problems containing integer variables, and demonstrate their effectiveness in the solution of large-scale graph partitioning problems. These algorithms combine the subproblem-coordination paradigm (and lower bounds) of price-directive decomposition methods with knapsack and genetic approaches to the utilization of \"building blocks\" of partial solutions. Even for graph partitioning problems requiring billions of variables in a standard 0-1 formulation, this approach produces high-quality solutions (as measured by deviations from an easily computed lower bound), and substantially outperforms widely-used graph partitioning techniques based on heuristics and spectral methods.",
+ "neighbors": [
+ 138,
+ 466,
+ 1107
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 206,
+ "label": 3,
+ "text": "Title: Hierarchical Spatio-Temporal Mapping of Disease Rates \nAbstract: Maps of regional morbidity and mortality rates are useful tools in determining spatial patterns of disease. Combined with socio-demographic census information, they also permit assessment of environmental justice, i.e., whether certain subgroups suffer disproportionately from certain diseases or other adverse effects of harmful environmental exposures. Bayes and empirical Bayes methods have proven useful in smoothing crude maps of disease risk, eliminating the instability of estimates in low-population areas while maintaining geographic resolution. In this paper we extend existing hierarchical spatial models to account for temporal effects and spatio-temporal interactions. Fitting the resulting highly-parametrized models requires careful implementation of Markov chain Monte Carlo (MCMC) methods, as well as novel techniques for model evaluation and selection. We illustrate our approach using a dataset of county-specific lung cancer rates in the state of Ohio during the period 1968-1988. ",
+ "neighbors": [
+ 706
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 207,
+ "label": 2,
+ "text": "Title: Feature Extraction Using an Unsupervised Neural Network \nAbstract: A novel unsupervised neural network for dimensionality reduction that seeks directions emphasizing multimodality is presented, and its connection to exploratory projection pursuit methods is discussed. This leads to a new statistical insight into the synaptic modification equations governing learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982). The importance of a dimensionality reduction principle based solely on distinguishing features is demonstrated using a phoneme recognition experiment. The extracted features are compared with features extracted using a back-propagation network.",
+ "neighbors": [
+ 469,
+ 1202,
+ 1275,
+ 1276,
+ 1279
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 208,
+ "label": 2,
+ "text": "Title: Investigating the Value of a Good Input Representation \nAbstract: This paper is reprinted from Computational Learning Theory and Natural Learning Systems, vol. 3, T. Petsche, S. Judd, and S. Hanson, (eds.), forthcoming 1995. Copyrighted 1995 by MIT Press Abstract The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. A number of factors, including training-set size and the ability of the learning algorithm to perform constructive induction, can mediate the effect of an input representation on the accuracy of a learned concept description. We present experiments that evaluate the effect of input representation on generalization performance for the real-world problem of finding genes in DNA. Our experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for the machine learning subfield of constructive induction because the relationship between the two representations is well known, and because conceptually, the representational shift involved in constructing the better representation should not be too imposing. ",
+ "neighbors": [
+ 81,
+ 405
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 209,
+ "label": 2,
+ "text": "Title: Learning Topology-Preserving Maps Using Self-Supervised Backpropagation \nAbstract: Self-supervised backpropagation is an unsupervised learning procedure for feedforward networks, where the desired output vector is identical with the input vector. For backpropagation, we are able to use powerful simulators running on parallel machines. Topology-preserving maps, on the other hand, can be developed by a variant of the competitive learning procedure. However, in a degenerate case, self-supervised backpropagation is a version of competitive learning. A simple extension of the cost function of backpropagation leads to a competitive version of self-supervised backpropagation, which can be used to produce topographic maps. We demonstrate the approach applied to the Traveling Salesman Problem (TSP). ",
+ "neighbors": [
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 210,
+ "label": 2,
+ "text": "Title: Radial Basis Functions: L p -approximation orders with scattered centres \nAbstract: In this paper we generalize several results on uniform approximation orders with radial basis functions in (Buhmann, Dyn and Levin, 1993) and (Dyn and Ron, 1993) to L p -approximation orders. These results apply, in particular, to approximants from spaces spanned by translates of radial basis functions by scattered centres. Examples to which our results apply include quasi-interpolation and least-squares approximation from radial function spaces.",
+ "neighbors": [
+ 211,
+ 345
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 211,
+ "label": 2,
+ "text": "Title: Radial basis function approximation: from gridded centers to scattered centers \nAbstract: The paper studies L 1 (IR d )-norm approximations from a space spanned by a discrete set of translates of a basis function . Attention here is restricted to functions whose Fourier transform is smooth on IR d n0, and has a singularity at the origin. Examples of such basis functions are the thin-plate splines and the multiquadrics, as well as other types of radial basis functions that are employed in Approximation Theory. The above approximation problem is well-understood in case the set of points ffi used for translating forms a lattice in IR d , and many optimal and quasi-optimal approximation schemes can already be found in the literature. In contrast, only few, mostly specific, results are known for a set ffi of scattered points. The main objective of this paper is to provide a general tool for extending approximation schemes that use integer translates of a basis function to the non-uniform case. We introduce a single, relatively simple, conversion method that preserves the approximation orders provided by a large number of schemes presently in the literature (more precisely, to almost all \"stationary schemes\"). In anticipation of future introduction of new schemes for uniform grids, an effort is made to impose only a few mild conditions on the function , which still allow for a unified error analysis to hold. In the course of the discussion here, the recent results of [BuDL] on scattered center approximation are reproduced and improved upon. ",
+ "neighbors": [
+ 210,
+ 345,
+ 1114,
+ 1300
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 212,
+ "label": 4,
+ "text": "Title: Machine Learning, Explanation-Based Learning and Reinforcement Learning: A Unified View \nAbstract: In speedup-learning problems, where full descriptions of operators are known, both explanation-based learning (EBL) and reinforcement learning (RL) methods can be applied. This paper shows that both methods involve fundamentally the same process of propagating information backward from the goal toward the starting state. Most RL methods perform this propagation on a state-by-state basis, while EBL methods compute the weakest preconditions of operators, and hence, perform this propagation on a region-by-region basis. Barto, Bradtke, and Singh (1995) have observed that many algorithms for reinforcement learning can be viewed as asynchronous dynamic programming. Based on this observation, this paper shows how to develop dynamic programming versions of EBL, which we call region-based dynamic programming or Explanation-Based Reinforcement Learning (EBRL). The paper compares batch and online versions of EBRL to batch and online versions of point-based dynamic programming and to standard EBL. The results show that region-based dynamic programming combines the strengths of EBL (fast learning and the ability to scale to large state spaces) with the strengths of reinforcement learning algorithms (learning of optimal policies). Results are shown in chess endgames and in synthetic maze tasks. ",
+ "neighbors": [
+ 276,
+ 318,
+ 327
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 213,
+ "label": 2,
+ "text": "Title: Some Extensions of the K-Means Algorithm for Image Segmentation and Pattern Classification \nAbstract: In this paper we present some extensions to the k-means algorithm for vector quantization that permit its efficient use in image segmentation and pattern classification tasks. It is shown that by introducing state variables that correspond to certain statistics of the dynamic behavior of the algorithm, it is possible to find the representative centers of the lower dimensional manifolds that define the boundaries between classes, for clouds of multi-dimensional, multi-class data; this permits one, for example, to find class boundaries directly from sparse data (e.g., in image segmentation tasks) or to efficiently place centers for pattern classification (e.g., with local Gaussian classifiers). The same state variables can be used to define algorithms for determining adaptively the optimal number of centers for clouds of data with space-varying density. Some examples of the application of these extensions are also given. This report describes research done within CIMAT (Guanajuato, Mexico), the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041; and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. J.L. Marroquin was supported in part by a grant from the Consejo Nacional de Ciencia y Tecnologia, Mexico. ",
+ "neighbors": [
+ 357,
+ 432
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 214,
+ "label": 4,
+ "text": "Title: Robust Reinforcement Learning in Motion Planning \nAbstract: While exploring to find better solutions, an agent performing online reinforcement learning (RL) can perform worse than is acceptable. In some cases, exploration might have unsafe, or even catastrophic, results, often modeled in terms of reaching `failure' states of the agent's environment. This paper presents a method that uses domain knowledge to reduce the number of failures during exploration. This method formulates the set of actions from which the RL agent composes a control policy to ensure that exploration is conducted in a policy space that excludes most of the unacceptable policies. The resulting action set has a more abstract relationship to the task being solved than is common in many applications of RL. Although the cost of this added safety is that learning may result in a suboptimal solution, we argue that this is an appropriate tradeoff in many problems. We illustrate this method in the domain of motion planning. ",
+ "neighbors": [
+ 318,
+ 324,
+ 507
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 215,
+ "label": 3,
+ "text": "Title: Selecting Input Variables Using Mutual Information and Nonparametric Density Estimation \nAbstract: In learning problems where a connectionist network is trained with a finite sized training set, better generalization performance is often obtained when unneeded weights in the network are eliminated. One source of unneeded weights comes from the inclusion of input variables that provide little information about the output variables. We propose a method for identifying and eliminating these input variables. The method first determines the relationship between input and output variables using nonparametric density estimation and then measures the relevance of input variables using the information theoretic concept of mutual information. We present results from our method on a simple toy problem and a nonlinear time series.",
+ "neighbors": [
+ 50,
+ 86
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 216,
+ "label": 6,
+ "text": "Title: Constructive Induction Using a Non-Greedy Strategy for Feature Selection \nAbstract: We present a method for feature construction and selection that finds a minimal set of conjunctive features that are appropriate to perform the classification task. For problems where this bias is appropriate, the method outperforms other constructive induction algorithms and is able to achieve higher classification accuracy. The application of the method in the search for minimal multi-level boolean expressions is presented and analyzed with the help of some examples.",
+ "neighbors": [
+ 371,
+ 374,
+ 485,
+ 881
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 217,
+ "label": 6,
+ "text": "Title: Mingers, 1989 J. Mingers. An empirical comparison of pruning methods for decision tree induction. Machine\nAbstract: Ourston and Mooney, 1990b ] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Labora tory, University of Texas, Austin, TX, December 1990. ",
+ "neighbors": [
+ 127,
+ 194,
+ 342,
+ 587,
+ 604,
+ 677,
+ 716,
+ 724,
+ 858,
+ 917,
+ 933,
+ 1189,
+ 1190,
+ 1305
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 218,
+ "label": 1,
+ "text": "Title: Fitness Landscapes and Difficulty in Genetic Programming \nAbstract: The structure of the fitness landscape on which genetic programming operates is examined. The landscapes of a range of problems of known difficulty are analyzed in an attempt to determine which landscape measures correlate with the difficulty of the problem. The autocorrelation of the fitness values of random walks, a measure which has been shown to be related to perceived difficulty using other techniques, is only a weak indicator of the difficulty as perceived by genetic programming. All of these problems show unusually low autocorrelation. Comparison of the range of landscape basin depths at the end of adaptive walks on the landscapes shows good correlation with problem difficulty, over the entire range of problems examined. ",
+ "neighbors": [
+ 91,
+ 106,
+ 542,
+ 707,
+ 821,
+ 822,
+ 960,
+ 978,
+ 1151
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 219,
+ "label": 6,
+ "text": "Title: Learning Decision Lists Using Homogeneous Rules \nAbstract: A decision list is an ordered list of conjunctive rules (?). Inductive algorithms such as AQ and CN2 learn decision lists incrementally, one rule at a time. Such algorithms face the rule overlap problem | the classification accuracy of the decision list depends on the overlap between the learned rules. Thus, even though the rules are learned in isolation, they can only be evaluated in concert. Existing algorithms solve this problem by adopting a greedy, iterative structure. Once a rule is learned, the training examples that match the rule are removed from the training set. We propose a novel solution to the problem: composing decision lists from homogeneous rules, rules whose classification accuracy does not change with their position in the decision list. We prove that the problem of finding a maximally accurate decision list can be reduced to the problem of finding maximally accurate homogeneous rules. We report on the performance of our algorithm on data sets from the UCI repository and on the MONK's problems. ",
+ "neighbors": [
+ 14,
+ 695
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 220,
+ "label": 2,
+ "text": "Title: Constructing Fuzzy Graphs from Examples \nAbstract: Methods to build function approximators from example data have gained considerable interest in the past. Especially methodologies that build models that allow an interpretation have attracted attention. Most existing algorithms, however, are either complicated to use or infeasible for high-dimensional problems. This article presents an efficient and easy to use algorithm to construct fuzzy graphs from example data. The resulting fuzzy graphs are based on locally independent fuzzy rules that operate solely on selected, important attributes. This enables the application of these fuzzy graphs also to problems in high dimensional spaces. Using illustrative examples and a real world data set it is demonstrated how the resulting fuzzy graphs offer quick insights into the structure of the example data, that is, the underlying model. ",
+ "neighbors": [
+ 49,
+ 374
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 221,
+ "label": 2,
+ "text": "Title: Hidden Markov Model Analysis of Motifs in Steroid Dehydrogenases and their Homologs \nAbstract: Methods to build function approximators from example data have gained considerable interest in the past. Especially methodologies that build models that allow an interpretation have attracted attention. Most existing algorithms, however, are either complicated to use or infeasible for high-dimensional problems. This article presents an efficient and easy to use algorithm to construct fuzzy graphs from example data. The resulting fuzzy graphs are based on locally independent fuzzy rules that operate solely on selected, important attributes. This enables the application of these fuzzy graphs also to problems in high dimensional spaces. Using illustrative examples and a real world data set it is demonstrated how the resulting fuzzy graphs offer quick insights into the structure of the example data, that is, the underlying model. ",
+ "neighbors": [
+ 3
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 222,
+ "label": 2,
+ "text": "Title: Spatial-Temporal Analysis of Temperature Using Smoothing Spline ANOVA \nAbstract: In tasks requiring sustained attention, human alertness varies on a minute time scale. This can have serious consequences in occupations ranging from air traffic control to monitoring of nuclear power plants. Changes in the electroencephalographic (EEG) power spectrum accompany these fluctuations in the level of alertness, as assessed by measuring simultaneous changes in EEG and performance on an auditory monitoring task. By combining power spectrum estimation, principal component analysis and artificial neural networks, we show that continuous, accurate, noninvasive, and near real-time estimation of an operator's global level of alertness is feasible using EEG measures recorded from as few as two central scalp sites. This demonstration could lead to a practical system for noninvasive monitoring of the cognitive state of human operators in attention-critical settings. ",
+ "neighbors": [
+ 246,
+ 1307
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 223,
+ "label": 3,
+ "text": "Title: Robustness Analysis of Bayesian Networks with Finitely Generated Convex Sets of Distributions \nAbstract: This paper presents exact solutions and convergent approximations for inferences in Bayesian networks associated with finitely generated convex sets of distributions. Robust Bayesian inference is the calculation of bounds on posterior values given perturbations in a probabilistic model. The paper presents exact inference algorithms and analyzes the circumstances where exact inference becomes intractable. Two classes of algorithms for numeric approximations are developed through transformations on the original model. The first transformation reduces the robust inference problem to the estimation of probabilistic parameters in a Bayesian network. The second transformation uses Lavine's bracketing algorithm to generate a sequence of maximization problems in a Bayesian network. The analysis is extended to the *-contaminated, the lower density bounded, the belief function, the sub-sigma, the density bounded, the total variation and the density ratio classes of distributions. c fl1996 Carnegie Mellon University",
+ "neighbors": [
+ 189,
+ 192,
+ 336,
+ 1046
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 224,
+ "label": 1,
+ "text": "Title: Evolving Graphs and Networks with Edge Encoding: Preliminary Report \nAbstract: We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress.",
+ "neighbors": [
+ 91,
+ 107,
+ 108
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 225,
+ "label": 3,
+ "text": "Title: Axioms of Causal Relevance \nAbstract: This paper develops axioms and formal semantics for statements of the form \"X is causally irrelevant to Y in context Z,\" which we interpret to mean \"Changing X will not affect Y if we hold Z constant.\" The axiomization of causal irrelevance is contrasted with the axiomization of informational irrelevance, as in \"Learning X will not alter our belief in Y , once we know Z.\" Two versions of causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure, that is governed by just two trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems about causal relevance.",
+ "neighbors": [
+ 141,
+ 451
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 226,
+ "label": 2,
+ "text": "Title: Representing and Learning Visual Schemas in Neural Networks for Scene Analysis \nAbstract: Using scene analysis as the task, this research focuses on three fundamental problems in neural network systems: (1) limited processing resources, (2) representing schemas, and (3) learning schemas. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on the other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting the gathered information. The system should also learn to represent structured knowledge from examples of objects and scenes. VISOR, the system described in this paper, consists of three main components. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. The Response Module learns to associate the schema activation patterns with external responses. It enables the external environment to provide reinforcement feedback for the learning of schematic structures. ",
+ "neighbors": [
+ 4,
+ 240
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 227,
+ "label": 3,
+ "text": "Title: Learning Limited Dependence Bayesian Classifiers \nAbstract: We present a framework for characterizing Bayesian classification methods. This framework can be thought of as a spectrum of allowable dependence in a given probabilistic model with the Naive Bayes algorithm at the most restrictive end and the learning of full Bayesian networks at the most general extreme. While much work has been carried out along the two ends of this spectrum, there has been surprising little done along the middle. We analyze the assumptions made as one moves along this spectrum and show the tradeoffs between model accuracy and learning speed which become critical to consider in a variety of data mining domains. We then present a general induction algorithm that allows for traversal of this spectrum depending on the available computational power for carrying out induction and show its application in a number of domains with different properties. ",
+ "neighbors": [
+ 336,
+ 369,
+ 1262
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 228,
+ "label": 1,
+ "text": "Title: The Evolutionary Cost of Learning \nAbstract: Traits that are acquired by members of an evolving population during their lifetime, through adaptive processes such as learning, can become genetically specified in later generations. Thus there is a change in the level of learning in the population over evolutionary time. This paper explores the idea that as well as the benefits to be gained from learning, there may also be costs to be paid for the ability to learn. It is these costs that supply the selection pressure for the genetic assimilation of acquired traits. Two models are presented that attempt to illustrate this assertion. The first uses Kauffman's NK fitness landscapes to show the effect that both explicit and implicit costs have on the assimilation of learnt traits. A characteristic `hump' is observed in the graph of the level of plasticity in the population showing that learning is first selected for and then against as evolution progresses. The second model is a practical example in which neural network controllers are evolved for a small mobile robot. Results from this experiment also show the hump. ",
+ "neighbors": [
+ 91,
+ 123,
+ 229,
+ 308
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 229,
+ "label": 1,
+ "text": "Title: Landscapes, Learning Costs and Genetic Assimilation. \nAbstract: The evolution of a population can be guided by phenotypic traits acquired by members of that population during their lifetime. This phenomenon, known as the Baldwin Effect, can speed the evolutionary process as traits that are initially acquired become genetically specified in later generations. This paper presents conditions under which this genetic assimilation can take place. As well as the benefits that lifetime adaptation can give a population, there may be a cost to be paid for that adaptive ability. It is the evolutionary trade-off between these costs and benefits that provides the selection pressure for acquired traits to become genetically specified. It is also noted that genotypic space, in which evolution operates, and phenotypic space, on which adaptive processes (such as learning) operate, are, in general, of a different nature. To guarantee an acquired characteristic can become genetically specified, then these spaces must have the property of neighbourhood correlation which means that a small distance between two individuals in phenotypic space implies that there is a small distance between the same two individuals in genotypic space.",
+ "neighbors": [
+ 228,
+ 1197
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 230,
+ "label": 2,
+ "text": "Title: Finite State Machines and Recurrent Neural Networks Automata and Dynamical Systems Approaches \nAbstract: Decision Trees have been widely used for classification/regression tasks. They are relatively much faster to build as compared to Neural Networks and are understandable by humans. In normal decision trees, based on the input vector, only one branch is followed. In Probabilistic OPtion trees, based on the input vector we follow all of the subtrees with some probability. These probabilities are learned by the system. Probabilistic decisions are likely to be useful, when the boundary of classes submerge in each other, or when there is noise in the input data. In addition they provide us with a confidence measure. We allow option nodes in our trees, Again, instead of uniform voting, we learn the weightage of every subtree.",
+ "neighbors": [
+ 436,
+ 890
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 231,
+ "label": 2,
+ "text": "Title: Constructing Deterministic Finite-State Automata in Recurrent Neural Networks \nAbstract: Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e. the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a \"programmed\" neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature. ",
+ "neighbors": [
+ 292,
+ 728,
+ 969
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 232,
+ "label": 4,
+ "text": "Title: High-Performance Job-Shop Scheduling With A Time-Delay TD() Network \nAbstract: Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm T D(). A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-T D() network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly outperform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them.",
+ "neighbors": [
+ 1,
+ 45,
+ 177,
+ 327
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 233,
+ "label": 2,
+ "text": "Title: POWER OF NEURAL NETS \nAbstract: Report SYCON-91-11 ABSTRACT This paper deals with the simulation of Turing machines by neural networks. Such networks are made up of interconnections of synchronously evolving processors, each of which updates its state according to a \"sigmoidal\" linear combination of the previous states of all units. The main result states that one may simulate all Turing machines by nets, in linear time. In particular, it is possible to give a net made up of about 1,000 processors which computes a universal partial-recursive function. (This is an update of Report SYCON-91-08; new results include the simulation in linear time of binary-tape machines, as opposed to the unary alphabets used in the previous version.) ",
+ "neighbors": [
+ 292,
+ 307,
+ 819,
+ 1025,
+ 1304,
+ 1309
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 234,
+ "label": 1,
+ "text": "Title: Competitive Environments Evolve Better Solutions for Complex Tasks \nAbstract: University of Wisconsin Computer Sciences Technical Report 876 (September 1989) Abstract In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning. ",
+ "neighbors": [
+ 91,
+ 106,
+ 119,
+ 300,
+ 413,
+ 459,
+ 568,
+ 960,
+ 981,
+ 999,
+ 1000,
+ 1337
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 235,
+ "label": 3,
+ "text": "Title: A note on convergence rates of Gibbs sampling for nonparametric mixtures \nAbstract: We consider a mixture model where the mixing distribution is random and is given a Dirichlet process prior. We describe the general structure of two Gibbs sampling algorithms that are useful for approximating Bayesian inferences in this problem. When the kernel f(x j ) of the mixture is bounded, we show that the Markov chains resulting from the Gibbs sampling are uniformly ergodic, and we provide an explicit rate bound. Unfortunately, the bound is not sharp in general; improving sensibly the bound seems however quite difficult.",
+ "neighbors": [
+ 73,
+ 74,
+ 947,
+ 1282
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 236,
+ "label": 6,
+ "text": "Title: Improved Boosting Algorithms Using Confidence-rated Predictions \nAbstract: We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find improved parameter settings as well as a refined criterion for training weak hypotheses. We give a specific method for assigning confidences to the predictions of decision trees, a method closely related to one used by Quinlan. This method also suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem. One of these leads to a new method for handling the single-label case which is simpler but as effective as techniques suggested by Freund and Schapire. Finally, we give some experimental results comparing a few of the algorithms discussed in this paper. ",
+ "neighbors": [
+ 147
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 237,
+ "label": 1,
+ "text": "Title: Genetic Self-Learning \nAbstract: Evolutionary Algorithms are direct random search algorithms which imitate the principles of natural evolution as a method to solve adaptation (learning) tasks in general. As such they have several features in common which can be observed on the genetic and phenotypic level of living species. In this paper the algorithms' capability of adaptation or learning in a wider sense is demonstrated, and it is focused on Genetic Algorithms to illustrate the learning process on the population level (first level learning), and on Evolution Strategies to demonstrate the learning process on the meta-level of strategy parameters (second level learning).",
+ "neighbors": [
+ 91,
+ 610,
+ 937
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 238,
+ "label": 3,
+ "text": "Title: LEARNING BAYESIAN NETWORKS WITH LOCAL STRUCTURE \nAbstract: We examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability distributions (CPDs) that quantify these networks. This increases the space of possible models, enabling the representation of CPDs with a variable number of parameters. The resulting learning procedure induces models that better emulate the interactions present in the data. We describe the theoretical foundations and practical aspects of learning local structures and provide an empirical evaluation of the proposed learning procedure. This evaluation indicates that learning curves characterizing this procedure converge faster, in the number of training instances, than those of the standard procedure, which ignores the local structure of the CPDs. Our results also show that networks learned with local structures tend to be more complex (in terms of arcs), yet require fewer parameters. ",
+ "neighbors": [
+ 34,
+ 321,
+ 724,
+ 1045,
+ 1246
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 239,
+ "label": 5,
+ "text": "Title: Rule Induction with CN2: Some Recent Improvements \nAbstract: The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. ",
+ "neighbors": [
+ 14,
+ 185,
+ 194,
+ 485,
+ 604,
+ 716,
+ 828,
+ 881,
+ 1120,
+ 1224,
+ 1251
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 240,
+ "label": 2,
+ "text": "Title: Book Review Introduction to the Theory of Neural Computation Reviewed by: 2 \nAbstract: Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models ",
+ "neighbors": [
+ 6,
+ 79,
+ 113,
+ 115,
+ 129,
+ 143,
+ 176,
+ 181,
+ 201,
+ 226,
+ 272,
+ 282,
+ 341,
+ 343,
+ 404,
+ 405,
+ 534,
+ 547,
+ 625,
+ 715,
+ 720,
+ 721,
+ 737,
+ 752,
+ 830,
+ 1062,
+ 1079,
+ 1194,
+ 1255
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 241,
+ "label": 3,
+ "text": "Title: Classifiers: A Theoretical and Empirical Study \nAbstract: This paper describes how a competitive tree learning algorithm can be derived from first principles. The algorithm approximates the Bayesian decision theoretic solution to the learning task. Comparative experiments with the algorithm and the several mature AI and statistical families of tree learning algorithms currently in use show the derived Bayesian algorithm is consistently as good or better, although sometimes at computational cost. Using the same strategy, we can design algorithms for many other supervised and model learning tasks given just a probabilistic representation for the kind of knowledge to be learned. As an illustration, a second learning algorithm is derived for learning Bayesian networks from data. Implications to incremental learning and the use of multiple models are also discussed.",
+ "neighbors": [
+ 14,
+ 724,
+ 842
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 242,
+ "label": 6,
+ "text": "Title: Irrelevant Features and the Subset Selection Problem \nAbstract: We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.",
+ "neighbors": [
+ 98,
+ 118,
+ 133,
+ 148,
+ 301,
+ 369,
+ 371,
+ 380,
+ 583,
+ 677,
+ 712,
+ 721,
+ 875,
+ 904,
+ 1086,
+ 1152,
+ 1211
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 243,
+ "label": 5,
+ "text": "Title: Limited Dual Path Execution \nAbstract: This work presents a hybrid branch predictor scheme that uses a limited form of dual path execution along with dynamic branch prediction to improve execution times. The ability to execute down both paths of a conditional branch enables the branch penalty to be minimized; however, relying exclusively on dual path execution is infeasible due because instruction fetch rates far exceed the capability of the pipeline to retire a single branch before others must be processed. By using confidence information, available in the dynamic branch prediction state tables, a limited form of dual path execution becomes feasible. This reduces the burden on the branch predictor by allowing predictions of low confidence to be avoided. In this study we present a new approach to gather branch prediction confidence with little or no overhead, and use this confidence mechanism to determine whether dual path execution or branch prediction should be used. Comparing this hybrid predictor model to the dynamic branch predictor shows a dramatic decrease in misprediction rate, which translates to an reduction in runtime of over 20%. These results imply that dual path execution, which often is thought to be an excessively resource consuming method, may be a worthy approach if restricted with an appropriate predicting set. ",
+ "neighbors": [
+ 103,
+ 175,
+ 349
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 244,
+ "label": 2,
+ "text": "Title: Homology Detection via Family Pairwise Search a straightforward generalization of pairwise sequence comparison algorithms to\nAbstract: The function of an unknown biological sequence can often be accurately inferred by identifying sequences homologous to the original sequence. Given a query set of known homologs, there exist at least three general classes of techniques for finding additional homologs: pairwise sequence comparisons, motif analysis, and hidden Markov modeling. Pairwise sequence comparisons are typically employed when only a single query sequence is known. Hidden Markov models (HMMs), on the other hand, are usually trained with sets of more than 100 sequences. Motif-based methods fall in between these two extremes. ",
+ "neighbors": [
+ 0,
+ 3,
+ 150
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 245,
+ "label": 6,
+ "text": "Title: A System for Induction of Oblique Decision Trees \nAbstract: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.",
+ "neighbors": [
+ 9,
+ 44,
+ 127,
+ 317,
+ 354,
+ 360
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 246,
+ "label": 2,
+ "text": "Title: Adaptive tuning of numerical weather prediction models: Randomized GCV in three and four dimensional data assimilation \nAbstract: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.",
+ "neighbors": [
+ 53,
+ 222
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 247,
+ "label": 0,
+ "text": "Title: BRACE: A Paradigm For the Discretization of Continuously Valued Data \nAbstract: Discretization of continuously valued data is a useful and necessary tool because many learning paradigms assume nominal data. A list of objectives for efficient and effective discretization is presented. A paradigm called BRACE (Boundary Ranking And Classification Evaluation) that attempts to meet the objectives is presented along with an algorithm that follows the paradigm. The paradigm meets many of the objectives, with potential for extension to meet the remainder. Empirical results have been promising. For these reasons BRACE has potential as an effective and efficient method for discretization of continuously valued data. A further advantage of BRACE is that it is general enough to be extended to other types of clustering/unsupervised learning. ",
+ "neighbors": [
+ 171,
+ 374
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 248,
+ "label": 2,
+ "text": "Title: Parameterization studies for the SAM and HMMER methods of hidden Markov model generation \nAbstract: Multiple sequence alignment of distantly related viral proteins remains a challenge to all currently available alignment methods. The hidden Markov model approach offers a new, flexible method for the generation of multiple sequence alignments. The results of studies attempting to infer appropriate parameter constraints for the generation of de novo HMMs for globin, kinase, aspartic acid protease, and ribonuclease H sequences by both the SAM and HMMER methods are described. ",
+ "neighbors": [
+ 3
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 249,
+ "label": 2,
+ "text": "Title: Fools Gold: Extracting Finite State Machines From Recurrent Network Dynamics \nAbstract: Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network, the next step is to understand the information processing carried out by the network. Some researchers (Giles et al., 1992; Watrous & Kuhn, 1992; Cleeremans et al., 1989) have resorted to extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes two conditions, sensitivity to initial conditions and frivolous computational explanations due to discrete measurements (Kolen & Pollack, 1993), which allow these extraction methods to return illusionary finite state descriptions.",
+ "neighbors": [
+ 77,
+ 436
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 250,
+ "label": 0,
+ "text": "Title: Bias and the Probability of Generalization \nAbstract: In order to be useful, a learning algorithm must be able to generalize well when faced with inputs not previously presented to the system. A bias is necessary for any generalization, and as shown by several researchers in recent years, no bias can lead to strictly better generalization than any other when summed over all possible functions or applications. This paper provides examples to illustrate this fact, but also explains how a bias or learning algorithm can be better than another in practice when the probability of the occurrence of functions is taken into account. It shows how domain knowledge and an understanding of the conditions under which each learning algorithm performs well can be used to increase the probability of accurate generalization, and identifies several of the conditions that should be considered when attempting to select an appropriate bias for a particular problem. ",
+ "neighbors": [
+ 185,
+ 401
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 251,
+ "label": 2,
+ "text": "Title: A Smooth Converse Lyapunov Theorem for Robust Stability \nAbstract: This paper presents a Converse Lyapunov Function Theorem motivated by robust control analysis and design. Our result is based upon, but generalizes, various aspects of well-known classical theorems. In a unified and natural manner, it (1) allows arbitrary bounded time-varying parameters in the system description, (2) deals with global asymptotic stability, (3) results in smooth (infinitely differentiable) Lyapunov functions, and (4) applies to stability with respect to not necessarily compact invariant sets. 1. Introduction. This work is motivated by problems of robust nonlinear stabilization. One of our main ",
+ "neighbors": [
+ 820
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 252,
+ "label": 0,
+ "text": "Title: Correcting Imperfect Domain Theories: A Knowledge-Level Analysis \nAbstract: Explanation-Based Learning [Mitchell et al., 1986; DeJong and Mooney, 1986] has shown promise as a powerful analytical learning technique. However, EBL is severely hampered by the requirement of a complete and correct domain theory for successful learning to occur. Clearly, in non-trivial domains, developing such a domain theory is a nearly impossible task. Therefore, much research has been devoted to understanding how an imperfect domain theory can be corrected and extended during system performance. In this paper, we present a characterization of this problem, and use it to analyze past research in the area. Past characterizations of the problem (e.g, [Mitchell et al., 1986; Rajamoney and DeJong, 1987]) have viewed the types of performance errors caused by a faulty domain theory as primary. In contrast, we focus primarily on the types of knowledge deficiencies present in the theory, and from these derive the types of performance errors that can result. Correcting the theory can be viewed as a search through the space of possible domain theories, with a variety of knowledge sources that can be used to guide the search. We examine the types of knowledge used by a variety of past systems for this purpose. The hope is that this analysis will indicate the need for a \"universal weak method\" of domain theory correction, in which different sources of knowledge for theory correction can be freely and flexibly combined. ",
+ "neighbors": [
+ 328,
+ 374,
+ 858
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 253,
+ "label": 4,
+ "text": "Title: Parameterized Heuristics for Intelligent Adaptive Network Routing in Large Communication Networks \nAbstract: Parameterized heuristics offers an elegant and powerful theoretical framework for design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. This paper describes a framework for heuristic routing in large networks. The effectiveness of the heuristic routing mechanism upon which Quo Vadis is based is described as part of a simulation study within a network with grid topology. A formal analysis of the underlying principles is presented through the incremental design of a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. This paper carefully derives the properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics and identify the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions of future research.",
+ "neighbors": [
+ 318
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 254,
+ "label": 3,
+ "text": "Title: Principal Curve Clustering With Noise \nAbstract: Technical Report 317 Department of Statistics University of Washington. 1 Derek Stanford is Graduate Research Assistant and Adrian E. Raftery is Professor of Statistics and Sociology, both at the Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, USA. E-mail: stanford@stat.washington.edu and raftery@stat.washington.edu. Web: http://www.stat.washington.edu/raftery. This research was supported by ONR grants N00014-96-1-0192 and N00014-96-1-0330. The authors are grateful to Simon Byers, Gilles Celeux and Christian Posse for helpful discussions. ",
+ "neighbors": [
+ 67,
+ 199,
+ 293
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 255,
+ "label": 6,
+ "text": "Title: How to Use Expert Advice (Extended Abstract) \nAbstract: We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We give implications of this result on the performance of batch learning algorithms in a PAC setting which improve on the best results currently known in this context. We also extend our analysis to the case in which log loss is used instead of the expected number of mistakes. ",
+ "neighbors": [
+ 2,
+ 294,
+ 316,
+ 346,
+ 409,
+ 508,
+ 586,
+ 764,
+ 804,
+ 927,
+ 946,
+ 1087,
+ 1109,
+ 1131,
+ 1215,
+ 1256,
+ 1321
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 256,
+ "label": 0,
+ "text": "Title: Towards Formalizations in Case-Based Reasoning for Synthesis \nAbstract: This paper presents the formalization of a novel approach to structural similarity assessment and adaptation in case-based reasoning (Cbr) for synthesis. The approach has been informally presented, exemplified, and implemented for the domain of industrial building design (Borner 1993). By relating the approach to existing theories we provide the foundation of its systematic evaluation and appropriate usage. Cases, the primary repository of knowledge, are represented structurally using an algebraic approach. Similarity relations provide structure preserving case modifications modulo the underlying algebra and an equational theory over the algebra (so available). This representation of a modeled universe of discourse enables theory-based inference of adapted solutions. The approach enables us to incorporate formally generalization, abstraction, geometrical transformation, and their combinations into Cbr. ",
+ "neighbors": [
+ 102,
+ 309,
+ 311,
+ 770
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 257,
+ "label": 6,
+ "text": "Title: Boosting a weak learning algorithm by majority To be published in Information and Computation \nAbstract: We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper \"The strength of weak learnability\", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances. ",
+ "neighbors": [
+ 316,
+ 330,
+ 392,
+ 869,
+ 964,
+ 1256,
+ 1333
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 258,
+ "label": 0,
+ "text": "Title: A Computational Model of Ratio Decidendi \nAbstract: This paper proposes a model of ratio decidendi as a justification structure consisting of a series of reasoning steps, some of which relate abstract predicates to other abstract predicates and some of which relate abstract predicates to specific facts. This model satisfies an important set of characteristics of ratio decidendi identified from the jurisprudential literature. In particular, the model shows how the theory under which a case is decided controls its precedential effect. By contrast, a purely exemplar-based model of ratio decidendi fails to account for the dependency of prece-dential effect on the theory of decision. ",
+ "neighbors": [
+ 378
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 259,
+ "label": 6,
+ "text": "Title: Pac Learning, Noise, and Geometry \nAbstract: This paper describes the probably approximately correct model of concept learning, paying special attention to the case where instances are points in Euclidean n-space. The problem of learning from noisy training data is also studied. ",
+ "neighbors": [
+ 62,
+ 155
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 260,
+ "label": 4,
+ "text": "Title: Learning Roles: Behavioral Diversity in Robot Teams \nAbstract: This paper describes research investigating behavioral specialization in learning robot teams. Each agent is provided a common set of skills (motor schema-based behavioral assemblages) from which it builds a task-achieving strategy using reinforcement learning. The agents learn individually to activate particular behavioral assemblages given their current situation and a reward signal. The experiments, conducted in robot soccer simulations, evaluate the agents in terms of performance, policy convergence, and behavioral diversity. The results show that in many cases, robots will automatically diversify by choosing heterogeneous behaviors. The degree of diversification and the performance of the team depend on the reward structure. When the entire team is jointly rewarded or penalized (global reinforcement), teams tend towards heterogeneous behavior. When agents are provided feedback individually (local reinforcement), they converge to identical policies. ",
+ "neighbors": [
+ 80,
+ 163
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 261,
+ "label": 2,
+ "text": "Title: Draft Symbolic Representation of Neural Networks \nAbstract: An early and shorter version of this paper has been accepted for presenta tion at IJCAI'95. ",
+ "neighbors": [
+ 105,
+ 917,
+ 1304
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 262,
+ "label": 4,
+ "text": "Title: A Cognitive Model of Learning to Navigate \nAbstract: Our goal is to develop a cognitive model of how humans acquire skills on complex cognitive tasks. We are pursuing this goal by designing computational architectures for the NRL Navigation task, which requires competent sensorimotor coordination. In this paper, we analyze the NRL Navigation task in depth. We then use data from experiments with human subjects learning this task to guide us in constructing a cognitive model of skill acquisition for the task. Verbal protocol data augments the black box view provided by execution traces of inputs and outputs. Computational experiments allow us to explore a space of alternative architectures for the task, guided by the quality of fit to human performance data. ",
+ "neighbors": [
+ 193,
+ 272,
+ 276,
+ 326
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 263,
+ "label": 4,
+ "text": "Title: Strategy Learning with Multilayer Connectionist Representations 1 \nAbstract: Results are presented that demonstrate the learning and fine-tuning of search strategies using connectionist mechanisms. Previous studies of strategy learning within the symbolic, production-rule formalism have not addressed fine-tuning behavior. Here a two-layer connectionist system is presented that develops its search from a weak to a task-specific strategy and fine-tunes its performance. The system is applied to a simulated, real-time, balance-control task. We compare the performance of one-layer and two-layer networks, showing that the ability of the two-layer network to discover new features and thus enhance the original representation is critical to solving the balancing task. ",
+ "neighbors": [
+ 48,
+ 59,
+ 169,
+ 264,
+ 300,
+ 327,
+ 328,
+ 1081
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 264,
+ "label": 4,
+ "text": "Title: On the Computational Economics of Reinforcement Learning \nAbstract: Following terminology used in adaptive control, we distinguish between indirect learning methods, which learn explicit models of the dynamic structure of the system to be controlled, and direct learning methods, which do not. We compare an existing indirect method, which uses a conventional dynamic programming algorithm, with a closely related direct reinforcement learning method by applying both methods to an infinite horizon Markov decision problem with unknown state-transition probabilities. The simulations show that although the direct method requires much less space and dramatically less computation per control action, its learning ability in this task is superior to, or compares favorably with, that of the more complex indirect method. Although these results do not address how the methods' performances compare as problems become more difficult, they suggest that given a fixed amount of computational power available per control action, it may be better to use a direct reinforcement learning method augmented with indirect techniques than to devote all available resources to a computation-ally costly indirect method. Comprehensive answers to the questions raised by this study depend on many factors making up the eco nomic context of the computation.",
+ "neighbors": [
+ 5,
+ 169,
+ 263,
+ 318,
+ 327,
+ 328
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 265,
+ "label": 3,
+ "text": "Title: A Knowledge-Based Framework for Belief Change Part I: Foundations \nAbstract: We propose a general framework in which to study belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes ' if he knows that ' is true in all the worlds he considers most plausible. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin [HF89] for modeling knowledge in multi-agent systems. We show that our framework is quite expressive and lets us model in a natural way a number of different scenarios for belief change. For example, we show how we can capture an analogue to prior probabilities, which can be updated by \"conditioning\". In a related paper, we show how the two best studied scenarios, belief revision and belief update, fit into the framework. ",
+ "neighbors": [
+ 157,
+ 160,
+ 196,
+ 1072,
+ 1077
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 266,
+ "label": 3,
+ "text": "Title: Adaptive Markov Chain Monte Carlo through Regeneration Summary \nAbstract: Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution . This is done by calculating averages over the sample path of a Markov chain having as its stationary distribution. For computational efficiency, the Markov chain should be rapidly mixing. This can sometimes be achieved only by careful design of the transition kernel of the chain, on the basis of a detailed preliminary exploratory analysis of . An alternative approach might be to allow the transition kernel to adapt whenever new features of are encountered during the MCMC run. However, if such adaptation occurs infinitely often, the stationary distribution of the chain may be disturbed. We describe a framework, based on the concept of Markov chain regeneration, which allows adaptation to occur infinitely often, but which does not disturb the stationary distribution of the chain or the consistency of sample-path averages. Key Words: Adaptive method; Bayesian inference; Gibbs sampling; Markov chain Monte Carlo; ",
+ "neighbors": [
+ 101,
+ 281,
+ 520,
+ 947,
+ 1228
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 267,
+ "label": 2,
+ "text": "Title: Interpolation Models with Multiple \nAbstract: A traditional interpolation model is characterized by the choice of reg-ularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant ff, and the noise model has a single parameter fi. The ratio ff=fi alone is responsible for determining globally all these attributes of the interpolant: its `complexity', `flexibility', `smoothness', `characteristic scale length', and `characteristic amplitude'. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of `conditional convexity' when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error. ",
+ "neighbors": [
+ 43,
+ 89,
+ 122
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 268,
+ "label": 0,
+ "text": "Title: What Daimler-Benz has learned as an industrial partner from the Machine Learning Project StatLog \nAbstract: Author of this paper was co-ordinator of the Machine Learning project StatLog during 1990-1993. This project was supported financially by the European Community. The main aim of StatLog was to evaluate different learning algorithms using real industrial and commercial applications. As an industrial partner and contributor, Daimler-Benz has introduced different applications to Stat-Log among them fault diagnosis, letter and digit recognition, credit-scoring and prediction of the number of registered trucks. We have learned a lot of lessons from this project which have effected our application oriented research in the field of Machine Learning (ML) in Daimler-Benz. We have distinguished that, especially, more research is necessary to prepare the ML-algorithms to handle the real industrial and commercial applications. In this paper we describe, shortly, the Daimler-Benz applications in StatLog, we discuss shortcomings of the applied ML-algorithms and finally we outline the fields where we think further research is necessary. ",
+ "neighbors": [
+ 273
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 269,
+ "label": 4,
+ "text": "Title: In Improving Elevator Performance Using Reinforcement Learning \nAbstract: This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are nonstationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility.",
+ "neighbors": [
+ 1,
+ 59,
+ 170,
+ 362,
+ 1007
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 270,
+ "label": 4,
+ "text": "Title: Category: Control, Navigation and Planning. Key words: Reinforcement learning, Exploration, Hidden state. Prefer oral presentation.\nAbstract: This paper presents Fringe Exploration, a technique for efficient exploration in partially observable domains. The key idea, (applicable to many exploration techniques), is to keep statistics in the space of possible short-term memories, instead of in the agent's current state space. Experimental results in a partially observable maze and in a difficult driving task with visual routines show dramatic performance improvements.",
+ "neighbors": [
+ 318,
+ 328,
+ 379
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 271,
+ "label": 2,
+ "text": "Title: Protein Structure Prediction: Selecting Salient Features from Large Candidate Pools \nAbstract: We introduce a parallel approach, \"DT-Select,\" for selecting features used by inductive learning algorithms to predict protein secondary structure. DT-Select is able to rapidly choose small, nonredundant feature sets from pools containing hundreds of thousands of potentially useful features. It does this by building a decision tree, using features from the pool, that classifies a set of training examples. The features included in the tree provide a compact description of the training data and are thus suitable for use as inputs to other inductive learning algorithms. Empirical experiments in the protein secondary-structure task, in which sets of complex features chosen by DT-Select are used to augment a standard artificial neural network representation, yield surprisingly little performance gain, even though features are selected from very large feature pools. We discuss some possible reasons for this result. 1 ",
+ "neighbors": [
+ 371,
+ 405
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 272,
+ "label": 4,
+ "text": "Title: Forward models: Supervised learning with a distal teacher \nAbstract: Internal models of the environment have an important role to play in adaptive systems in general and are of particular importance for the supervised learning paradigm. In this paper we demonstrate that certain classical problems associated with the notion of the \"teacher\" in supervised learning can be solved by judicious use of learned internal models as components of the adaptive system. In particular, we show how supervised learning algorithms can be utilized in cases in which an unknown dynamical system intervenes between actions and desired outcomes. Our approach applies to any supervised learning algorithm that is capable of learning in multi-layer networks. *This paper is a revised version of MIT Center for Cognitive Science Occasional Paper #40. We wish to thank Michael Mozer, Andrew Barto, Robert Jacobs, Eric Loeb, and James McClelland for helpful comments on the manuscript. This project was supported in part by BRSG 2 S07 RR07047-23 awarded by the Biomedical Research Support Grant Program, Division of Research Resources, National Institutes of Health, by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from Siemens Corporation, by a grant from the Human Frontier Science Program, and by grant N00014-90-J-1942 awarded by the Office of Naval Research. ",
+ "neighbors": [
+ 169,
+ 193,
+ 240,
+ 262,
+ 327,
+ 328,
+ 430,
+ 918
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 273,
+ "label": 0,
+ "text": "Title: An Improved Algorithm for Incremental Induction of Decision Trees \nAbstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. ",
+ "neighbors": [
+ 127,
+ 159,
+ 268,
+ 284,
+ 300,
+ 327
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 274,
+ "label": 2,
+ "text": "Title: Modelling the Manifolds of Images of Handwritten Digits \nAbstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. ",
+ "neighbors": [
+ 149,
+ 387,
+ 1181,
+ 1298
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 275,
+ "label": 6,
+ "text": "Title: The Weighted Majority Algorithm \nAbstract: fl This research was primarily conducted while this author was at the University of Calif. at Santa Cruz with support from ONR grant N00014-86-K-0454, and at Harvard University, supported by ONR grant N00014-85-K-0445 and DARPA grant AFOSR-89-0506. Current address: NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. E-mail address: nickl@research.nj.nec.com. y Supported by ONR grants N00014-86-K-0454 and N00014-91-J-1162. Part of this research was done while this author was on sabbatical at Aiken Computation Laboratory, Harvard, with partial support from the ONR grants N00014-85-K-0445 and N00014-86-K-0454. Address: Department of Computer Science, University of California at Santa Cruz. E-mail address: manfred@cs.ucsc.edu. ",
+ "neighbors": [
+ 2
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 276,
+ "label": 4,
+ "text": "Title: The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces \nAbstract: Parti-game is a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces. In high dimensions it is essential that learning does not plan uniformly over a state-space. Parti-game maintains a decision-tree partitioning of state-space and applies techniques from game-theory and computational geometry to efficiently and adaptively concentrate high resolution only on critical areas. The current version of the algorithm is designed to find feasible paths or trajectories to goal regions in high dimensional spaces. Future versions will be designed to find a solution that optimizes a real-valued criterion. Many simulated problems have been tested, ranging from two-dimensional to nine-dimensional state-spaces, including mazes, path planning, non-linear dynamics, and planar snake robots in restricted spaces. In all cases, a good solution is found in less than ten trials and a few minutes. ",
+ "neighbors": [
+ 161,
+ 169,
+ 212,
+ 262,
+ 318,
+ 328,
+ 379,
+ 434
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 277,
+ "label": 3,
+ "text": "Title: Comparing Predictive Inference Methods for Discrete Domains \nAbstract: Predictive inference is seen here as the process of determining the predictive distribution of a discrete variable, given a data set of training examples and the values for the other problem domain variables. We consider three approaches for computing this predictive distribution, and assume that the joint probability distribution for the variables belongs to a set of distributions determined by a set of parametric models. In the simplest case, the predictive distribution is computed by using the model with the maximum a posteriori (MAP) posterior probability. In the evidence approach, the predictive distribution is obtained by averaging over all the individual models in the model family. In the third case, we define the predictive distribution by using Rissanen's new definition of stochastic complexity. Our experiments performed with the family of Naive Bayes models suggest that when using all the data available, the stochastic complexity approach produces the most accurate predictions in the log-score sense. However, when the amount of available training data is decreased, the evidence approach clearly outperforms the two other approaches. The MAP predictive distribution is clearly inferior in the log-score sense to the two more sophisticated approaches, but for the 0/1-score the MAP approach may still in some cases produce the best results. ",
+ "neighbors": [
+ 879
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 278,
+ "label": 0,
+ "text": "Title: CASE-BASED CREATIVE DESIGN \nAbstract: Designers across a variety of domains engage in many of the same creative activities. Since much creativity stems from using old solutions in novel ways, we believe that case-based reasoning can be used to explain many creative design processes. ",
+ "neighbors": [
+ 35,
+ 130,
+ 649,
+ 718,
+ 893
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 279,
+ "label": 2,
+ "text": "Title: Language as a dynamical system \nAbstract: Designers across a variety of domains engage in many of the same creative activities. Since much creativity stems from using old solutions in novel ways, we believe that case-based reasoning can be used to explain many creative design processes. ",
+ "neighbors": [
+ 308
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 280,
+ "label": 6,
+ "text": "Title: Prediction, Learning, Uniform Convergence, and Scale-sensitive Dimensions \nAbstract: We present a new general-purpose algorithm for learning classes of [0; 1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each * > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0; 1]-valued functions to be agnostically learnable to within *, and to be an *-uniform Glivenko-Cantelli class. ",
+ "neighbors": [
+ 62,
+ 316,
+ 346
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 281,
+ "label": 3,
+ "text": "Title: Self Regenerative Markov Chain Monte Carlo Summary \nAbstract: We propose a new method of construction of Markov chains with a given stationary distribution . This method is based on construction of an auxiliary chain with some other stationary distribution and picking elements of this auxiliary chain a suitable number of times. The proposed method has many advantages over its rivals. It is easy to implement; it provides a simple analysis; it can be faster and more efficient than the currently available techniques and it can also be adapted during the course of the simulation. We make theoretical and numerical comparisons of the characteristics of the proposed algorithm with some other MCMC techniques. ",
+ "neighbors": [
+ 101,
+ 266,
+ 1200,
+ 1228
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 282,
+ "label": 1,
+ "text": "Title: Parallel Search for Neural Network Under the guidance of \nAbstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. ",
+ "neighbors": [
+ 240
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 283,
+ "label": 2,
+ "text": "Title: BRAINSTRUCTURED CONNECTIONIST NETWORKS THAT PERCEIVE AND LEARN \nAbstract: This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of such structures. The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g., houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation-discovery (feedback-guided growth of new links and nodes, subject to brain-like constraints (e.g., local receptive fields, global convergence-divergence). The information processing transforms discovered through generation are fine-tuned by feedback-guided reweight-ing of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g., letters of the alphabet, cups, apples, bananas) through feedback-guided generation and reweighting. These show large improvements over networks that either lack brain-like structure or/and learn by reweighting of links alone. ",
+ "neighbors": [
+ 286,
+ 386,
+ 1029,
+ 1235
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 284,
+ "label": 6,
+ "text": "Title: Decision Tree Induction Based on Efficient Tree Restructuring \nAbstract: The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI). These approaches and several variants offer new computational and classifier characteristics that lend themselves to particular applications. ",
+ "neighbors": [
+ 273,
+ 1210
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 285,
+ "label": 4,
+ "text": "Title: 2-D Pole Balancing with Recurrent Evolutionary Networks \nAbstract: The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. In this paper we present a more difficult version to the classic problem where the cart and pole can move in a plane. We demonstrate a neuroevolution system (Enforced Sub-Populations, or ESP) that can solve this difficult problem without velocity information.",
+ "neighbors": [
+ 140,
+ 325
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 286,
+ "label": 2,
+ "text": "Title: Some Biases for Efficient Learning of Spatial, Temporal, and Spatio-Temporal Patterns \nAbstract: This paper introduces and explores some representational biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks (CN) massively parallel networks of simple computing elements. It examines learning mechanisms that constructively build up network structures that encode information from environmental stimuli at successively higher resolutions as needed for the tasks (e.g., perceptual recognition) that the network has to perform. Some simple examples are presented to illustrate the the basic structures and processes used in such networks to ensure the parsimony of learned representations by guiding the system to focus its efforts at the minimal adequate resolution. Several extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns are discussed. ",
+ "neighbors": [
+ 96,
+ 283,
+ 288,
+ 386
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 287,
+ "label": 4,
+ "text": "Title: Fast Online Q() \nAbstract: Q()-learning uses TD()-methods to accelerate Q-learning. The update complexity of previous online Q() implementations based on lookup-tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed. ",
+ "neighbors": [
+ 169,
+ 327,
+ 329,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 288,
+ "label": 2,
+ "text": "Title: Generative Learning Structures and Processes for Generalized Connectionist Networks \nAbstract: Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative for they offer a set of mechanisms for constructive and adaptive determination of the network architecture the number of processing elements and the connectivity among them as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology e.g., rather slow learning and the need for an a-priori choice of a network architecture. Several alternative designs as well as a range of control structures and processes which can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized and several extensions and refinements of such algorithms, and directions for future research are outlined. ",
+ "neighbors": [
+ 96,
+ 286,
+ 991,
+ 1051,
+ 1083,
+ 1236
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 289,
+ "label": 6,
+ "text": "Title: Machine Learning by Function Decomposition \nAbstract: We present a new machine learning method that, given a set of training examples, induces a definition of the target concept in terms of a hierarchy of intermediate concepts and their definitions. This effectively decomposes the problem into smaller, less complex problems. The method is inspired by the Boolean function decomposition approach to the design of digital circuits. To cope with high time complexity of finding an optimal decomposition, we propose a suboptimal heuristic algorithm. The method, implemented in program HINT (HIerarchy Induction Tool), is experimentally evaluated using a set of artificial and real-world learning problems. It is shown that the method performs well both in terms of classification accuracy and discovery of meaningful concept hierarchies.",
+ "neighbors": [
+ 300
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 290,
+ "label": 5,
+ "text": "Title: The Bayesian Approach to Tree-Structured Regression \nAbstract: In the context of inductive learning, the Bayesian approach turned out to be very successful in estimating probabilities of events when there are only a few learning examples. The m-probability estimate was developed to handle such situations. In this paper we present the m-distribution estimate, an extension to the m-probability estimate which, besides the estimation of probabilities, covers also the estimation of probability distributions. We focus on its application in the construction of regression trees. The theoretical results were incorporated into a system for automatic induction of regression trees. The results of applying the upgraded system to several domains are presented and compared to previous results. ",
+ "neighbors": [
+ 183,
+ 389
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 291,
+ "label": 3,
+ "text": "Title: The Bayesian Approach to Tree-Structured Regression \nAbstract: TECHNICAL REPORT NO. 967 August 1996 ",
+ "neighbors": [
+ 204,
+ 298
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 292,
+ "label": 2,
+ "text": "Title: Fault-Tolerant Implementation of Finite-State Automata in Recurrent Neural Networks \nAbstract: Recently, we have proven that the dynamics of any deterministic finite-state automata (DFA) with n states and m input symbols can be implemented in a sparse second-order recurrent neural network (SORNN) with n + 1 state neurons and O(mn) second-order weights and sigmoidal discriminant functions [5]. We investigate how that constructive algorithm can be extended to fault-tolerant neural DFA implementations where faults in an analog implementation of neurons or weights do not affect the desired network performance. We show that tolerance to weight perturbation can be achieved easily; tolerance to weight and/or neuron stuck-at-zero faults, however, requires duplication of the network resources. This result has an impact on the construction of neural DFAs with a dense internal representation of DFA states.",
+ "neighbors": [
+ 231,
+ 233
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 293,
+ "label": 3,
+ "text": "Title: Detecting Features in Spatial Point Processes with Clutter via Model-Based Clustering \nAbstract: Technical Report No. 295 Department of Statistics, University of Washington October, 1995 1 Abhijit Dasgupta is a graduate student at the Department of Biostatistics, University of Washington, Box 357232, Seattle, WA 98195-7232, and his e-mail address is dasgupta@biostat.washington.edu. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, and his e-mail address is raftery@stat.washington.edu. This research was supported by Office of Naval Research Grant no. N-00014-91-J-1074. The authors are grateful to Peter Guttorp, Girardeau Henderson and Robert Muise for helpful discussions. ",
+ "neighbors": [
+ 67,
+ 85,
+ 254
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 294,
+ "label": 6,
+ "text": "Title: Gambling in a rigged casino: The adversarial multi-armed bandit problem \nAbstract: In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T 1=2 ), and we give an improved rate of convergence when the best arm has fairly low payoff. We also prove a general matching lower bound on the best possible performance of any algorithm in our setting. In addition, we consider a setting in which the player has a team of experts advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.",
+ "neighbors": [
+ 255,
+ 330
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 295,
+ "label": 3,
+ "text": "Title: Sensitivities: An Alternative to Conditional Probabilities for Bayesian Belief Networks \nAbstract: We show an alternative way of representing a Bayesian belief network by sensitivities and probability distributions. This representation is equivalent to the traditional representation by conditional probabilities, but makes dependencies between nodes apparent and intuitively easy to understand. We also propose a QR matrix representation for the sensitivities and/or conditional probabilities which is more efficient, in both memory requirements and computational speed, than the traditional representation for computer-based implementations of probabilistic inference. We use sensitivities to show that for a certain class of binary networks, the computation time for approximate probabilistic inference with any positive upper bound on the error of the result is independent of the size of the network. Finally, as an alternative to traditional algorithms that use conditional probabilities, we describe an exact algorithm for probabilistic inference that uses the QR-representation for sensitivities and updates probability distributions of nodes in a network according to messages from the neigh bors.",
+ "neighbors": [
+ 373,
+ 1137
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 296,
+ "label": 2,
+ "text": "Title: A Supercomputer for Neural Computation \nAbstract: The requirement to train large neural networks quickly has prompted the design of a new massively parallel supercomputer using custom VLSI. This design features 128 processing nodes, communicating over a mesh network connected directly to the processor chip. Studies show peak performance in the range of 160 billion arithmetic operations per second. This paper presents the case for custom hardware that combines neural network-specific features with a general programmable machine architecture, and briefly describes the design in progress. ",
+ "neighbors": [
+ 158
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 297,
+ "label": 6,
+ "text": "Title: Active Learning with Committees for Text Categorization \nAbstract: In many real-world domains like text categorization, supervised learning requires a large number of training examples. In this paper we describe an active learning method that uses a committee of learners to reduce the number of training examples required for learning. Our approach is similar to the Query by Committee framework, where disagreement among the committee members on the predicted label for the input part of the example is used to signal the need for knowing the actual value of the label. Our experiments in text categorization using a committee of Winnow-based learners demonstrate that this approach can reduce the number of labeled training examples required over that used by a single Winnow learner by 1-2 orders of magnitude. This paper is not under review or accepted for publication in another conference or journal. Acknowledgements: The availability of the Reuters-22173 corpus [Reuters] and of the | STAT Data Manipulation and Analysis Programs [Perlman] has greatly assisted in our research to date. ",
+ "neighbors": [
+ 659,
+ 1281
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 298,
+ "label": 2,
+ "text": "Title: Smoothing Spline ANOVA for Exponential Families, with Application to the Wisconsin Epidemiological Study of Diabetic\nAbstract: In this paper I give a review of ensemble learning using a simple example. ",
+ "neighbors": [
+ 109,
+ 162,
+ 291,
+ 408,
+ 1307
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 299,
+ "label": 5,
+ "text": "Title: Covering vs. Divide-and-Conquer for Top-Down Induction of Logic Programs \nAbstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in that more compact hypotheses are found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learn ing recursive definitions.",
+ "neighbors": [
+ 198,
+ 374,
+ 615,
+ 616,
+ 708,
+ 1199
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 300,
+ "label": 1,
+ "text": "Title: Some studies in machine learning using the game of checkers. IBM Journal, 3(3):211-229, 1959. Some\nAbstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in that more compact hypotheses are found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learn ing recursive definitions.",
+ "neighbors": [
+ 91,
+ 106,
+ 161,
+ 234,
+ 263,
+ 273,
+ 289,
+ 310,
+ 327,
+ 416,
+ 511,
+ 529,
+ 680,
+ 939,
+ 981,
+ 1038,
+ 1043,
+ 1253,
+ 1267,
+ 1312
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 301,
+ "label": 2,
+ "text": "Title: An Inductive Learning Approach to Prognostic Prediction \nAbstract: This paper introduces the Recurrence Surface Approximation, an inductive learning method based on linear programming that predicts recurrence times using censored training examples, that is, examples in which the available training output may be only a lower bound on the \"right answer.\" This approach is augmented with a feature selection method that chooses an appropriate feature set within the context of the linear programming generalizer. Computational results in the field of breast cancer prognosis are shown. A straightforward translation of the prediction method to an artificial neural network model is also proposed.",
+ "neighbors": [
+ 242,
+ 658
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 302,
+ "label": 6,
+ "text": "Title: MML mixture modelling of multi-state, Poisson, von Mises circular and Gaussian distributions \nAbstract: Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also consistent and efficient. We provide a brief overview of MML inductive inference (Wallace and Boulton (1968), Wallace and Freeman (1987)), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture mod-elling program, Snob (Wallace and Boulton (1968), Wal-lace (1986), Wallace and Dowe(1994)) uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components. The message length is (to within a constant) the logarithm of the posterior probability of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated, and permits multi-variate data from Gaussian, discrete multi-state, Poisson and von Mises circular distributions. ",
+ "neighbors": [
+ 397,
+ 795
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 303,
+ "label": 2,
+ "text": "Title: VISIT: An Efficient Computational Model of Human Visual Attention \nAbstract: One of the challenges for models of cognitive phenomena is the development of efficient and exible interfaces between low level sensory information and high level processes. For visual processing, researchers have long argued that an attentional mechanism is required to perform many of the tasks required by high level vision. This thesis presents VISIT, a connectionist model of covert visual attention that has been used as a vehicle for studying this interface. The model is efficient, exible, and is biologically plausible. The complexity of the network is linear in the number of pixels. Effective parallel strategies are used to minimize the number of iterations required. The resulting system is able to efficiently solve two tasks that are particularly difficult for standard bottom-up models of vision: computing spatial relations and visual search. Simulations show that the networks behavior matches much of the known psychophysical data on human visual attention. The general architecture of the model also closely matches the known physiological data on the human attention system. Various extensions to VISIT are discussed, including methods for learning the component modules. ",
+ "neighbors": [
+ 432,
+ 924,
+ 1058,
+ 1336
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 304,
+ "label": 2,
+ "text": "Title: Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks \nAbstract: A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions[1, 2]. This mathematical result helped to establish the paradigm of neural computation with fixed point attractors[3]. But in reality, interactions between neurons in the brain are asymmetric. Furthermore, the dynamical behaviors seen in the brain are not confined to fixed point attractors, but also include oscillations and complex nonperiodic behavior. These other types of dynamics can be realized by asymmetric networks, and may be useful for neural computation. For these reasons, it is important to understand the global behavior of asymmetric neural networks. The interaction between an excitatory neuron and an inhibitory neuron is clearly asymmetric. Here we consider a class of networks that incorporates this fundamental asymmetry of the brain's microcircuitry. Networks of this class have distinct populations of excitatory and inhibitory neurons, with antisymmetric interactions minimax and dissipative Hamiltonian forms of the network dynamics.",
+ "neighbors": [
+ 394
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 305,
+ "label": 2,
+ "text": "Title: FEEDBACK STABILIZATION OF NONLINEAR SYSTEMS \nAbstract: This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. ",
+ "neighbors": [
+ 403,
+ 831,
+ 1149
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 306,
+ "label": 6,
+ "text": "Title: Sequential PAC Learning \nAbstract: We consider the use of \"on-line\" stopping rules to reduce the number of training examples needed to pac-learn. Rather than collect a large training sample that can be proved sufficient to eliminate all bad hypotheses a priori, the idea is instead to observe training examples one-at-a-time and decide \"on-line\" whether to stop and return a hypothesis, or continue training. The primary benefit of this approach is that we can detect when a hypothesizer has actually \"converged,\" and halt training before the standard fixed-sample-size bounds. This paper presents a series of such sequential learning procedures for: distribution-free pac-learning, \"mistake-bounded to pac\" conversion, and distribution-specific pac-learning, respectively. We analyze the worst case expected training sample size of these procedures, and show that this is often smaller than existing fixed sample size bounds | while providing the exact same worst case pac-guarantees. We also provide lower bounds that show these reductions can at best involve constant (and possibly log) factors. However, empirical studies show that these sequential learning procedures actually use many times fewer training examples in prac tice.",
+ "neighbors": [
+ 62,
+ 392,
+ 442,
+ 869
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 307,
+ "label": 2,
+ "text": "Title: Dimension of Recurrent Neural Networks \nAbstract: DIMACS Technical Report 96-56 December 1996 ",
+ "neighbors": [
+ 31,
+ 112,
+ 116,
+ 233
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 308,
+ "label": 1,
+ "text": "Title: Learning and evolution in neural networks \nAbstract: DIMACS Technical Report 96-56 December 1996 ",
+ "neighbors": [
+ 4,
+ 70,
+ 228,
+ 279,
+ 594,
+ 675,
+ 940,
+ 961,
+ 1138
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 309,
+ "label": 0,
+ "text": "Title: Structural Similarity as Guidance in Case-Based Design \nAbstract: This paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (Cbr). We advance structural similarity assessment which provides not only a single numeric value but the most specific structure two cases have in common, inclusive of the modification rules needed to obtain this structure from the two cases. Our approach treats retrieval, matching and adaptation as a group of dependent processes. This guarantees the retrieval and matching of not only similar but adaptable cases. Both together enlarge the overall problem solving performance of Cbr and the explainability of case selection and adaptation considerably. Although our approach is more theoretical in nature and not restricted to a specific domain, we will give an example taken from the domain of industrial building design. Additionally, we will sketch two prototypical implementations of this approach.",
+ "neighbors": [
+ 102,
+ 256,
+ 311,
+ 637,
+ 678,
+ 806,
+ 928
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 310,
+ "label": 0,
+ "text": "Title: A Model-Based Approach to Blame-Assignment in Design \nAbstract: We analyze the blame-assignment task in the context of experience-based design and redesign of physical devices. We identify three types of blame-assignment tasks that differ in the types of information they take as input: the design does not achieve a desired behavior of the device, the design results in an undesirable behavior, a specific structural element in the design misbehaves. We then describe a model-based approach for solving the blame-assignment task. This approach uses structure-behavior-function models that capture a designer's comprehension of the way a device works in terms of causal explanations of how its structure results in its behaviors. We also address the issue of indexing the models in memory. We discuss how the three types of blame-assignment tasks require different types of indices for accessing the models. Finally we describe the KRITIK2 system that implements and evaluates this model-based approach to blame assignment.",
+ "neighbors": [
+ 300,
+ 313,
+ 352,
+ 913
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 311,
+ "label": 0,
+ "text": "Title: Task-Oriented Knowledge Acquisition and Reasoning for Design Support Systems \nAbstract: We present a framework for task-driven knowledge acquisition in the development of design support systems. Different types of knowledge that enter the knowledge base of a design support system are defined and illustrated both from a formal and from a knowledge acquisition vantage point. Special emphasis is placed on the task-structure, which is used to guide both acquisition and application of knowledge. Starting with knowledge for planning steps in design and augmenting this with problem-solving knowledge that supports design, a formal integrated model of knowledge for design is constructed. Based on the notion of knowledge acquisition as an incremental process we give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system. Finally, we depict how different kinds of knowledge interact in a design support system. ? This research was supported by the German Ministry for Research and Technology (BMFT) within the joint project FABEL under contract no. 413-4001-01IW104. Project partners in FABEL are German National Research Center of Computer Science (GMD), Sankt Augustin, BSR Consulting GmbH, Munchen, Technical University of Dresden, HTWK Leipzig, University of Freiburg, and University of Karlsruhe. ",
+ "neighbors": [
+ 102,
+ 256,
+ 309,
+ 637
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 312,
+ "label": 2,
+ "text": "Title: Comparison of Bayesian and Neural Net Unsupervised Classification Techniques \nAbstract: Unsupervised classification is the classification of data into a number of classes in such a way that data in each class are all similar to each other. In the past there have been few if any studies done to compare the performance of different unsupervised classification techniques. In this paper we review Bayesian and neural net approaches to unsupervised classification and present results of experiments that we did to compare Autoclass, a Bayesian classification system, and ART2, a neural net classification algorithm.",
+ "neighbors": [
+ 432,
+ 453,
+ 674
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 313,
+ "label": 0,
+ "text": "Title: Meta-Cases: Explaining Case-Based Reasoning \nAbstract: AI research on case-based reasoning has led to the development of many laboratory case-based systems. As we move towards introducing these systems into work environments, explaining the processes of case-based reasoning is becoming an increasingly important issue. In this paper we describe the notion of a meta-case for illustrating, explaining and justifying case-based reasoning. A meta-case contains a trace of the processing in a problem-solving episode, and provides an explanation of the problem-solving decisions and a (partial) justification for the solution. The language for representing the problem-solving trace depends on the model of problem solving. We describe a task-method-knowledge (TMK) model of problem-solving and describe the representation of meta-cases in the TMK language. We illustrate this explanatory scheme with examples from Interactive Kritik, a computer-based de sign and learning environment presently under development.",
+ "neighbors": [
+ 310
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 314,
+ "label": 2,
+ "text": "Title: Minimum-Risk Profiles of Protein Families Based on Statistical Decision Theory \nAbstract: Statistical decision theory provides a principled way to estimate amino acid frequencies in conserved positions of a protein family. The goal is to minimize the risk function, or the expected squared-error distance between the estimates and the true population frequencies. The minimum-risk estimates are obtained by adding an optimal number of pseudocounts to the observed data. Two formulas are presented, one for pseudocounts based on marginal amino acid frequencies and one for pseudocounts based on the observed data. Experimental results show that profiles constructed using minimal-risk estimates are more discriminating than those constructed using existing methods.",
+ "neighbors": [
+ 0,
+ 3,
+ 150
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 315,
+ "label": 4,
+ "text": "Title: Value Function Based Production Scheduling \nAbstract: Production scheduling, the problem of sequentially configuring a factory to meet forecasted demands, is a critical problem throughout the manufacturing industry. The requirement of maintaining product inventories in the face of unpredictable demand and stochastic factory output makes standard scheduling models, such as job-shop, inadequate. Currently applied algorithms, such as simulated annealing and constraint propagation, must employ ad-hoc methods such as frequent replanning to cope with uncertainty. In this paper, we describe a Markov Decision Process (MDP) formulation of production scheduling which captures stochasticity in both production and demands. The solution to this MDP is a value function which can be used to generate optimal scheduling decisions online. A simple example illustrates the theoretical superiority of this approach over replanning-based methods. We then describe an industrial application and two reinforcement learning methods for generating an approximate value function on this domain. Our results demonstrate that in both deterministic and noisy scenarios, value function approx imation is an effective technique. ",
+ "neighbors": [
+ 45,
+ 318,
+ 327,
+ 1007
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 316,
+ "label": 6,
+ "text": "Title: Efficient Distribution-free Learning of Probabilistic Concepts \nAbstract: In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior|thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts. ",
+ "neighbors": [
+ 165,
+ 255,
+ 257,
+ 280,
+ 333,
+ 346,
+ 392
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 317,
+ "label": 6,
+ "text": "Title: LEARNING BY USING DYNAMIC FEATURE COMBINATION AND SELECTION \nAbstract: ",
+ "neighbors": [
+ 245,
+ 330,
+ 792,
+ 1245
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 318,
+ "label": 4,
+ "text": "Title: Learning to Act using Real-Time Dynamic Programming \nAbstract: fl The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). ",
+ "neighbors": [
+ 1,
+ 5,
+ 30,
+ 33,
+ 48,
+ 52,
+ 92,
+ 95,
+ 120,
+ 136,
+ 161,
+ 177,
+ 178,
+ 212,
+ 214,
+ 253,
+ 264,
+ 270,
+ 276,
+ 315,
+ 320,
+ 322,
+ 334,
+ 350,
+ 362,
+ 372,
+ 391,
+ 400,
+ 402,
+ 426,
+ 434
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 319,
+ "label": 2,
+ "text": "Title: Object Selection Based on Oscillatory Correlation \nAbstract: 1 Technical Report: OSU-CISRC-12/96 - TR67, 1996 Abstract One of the classical topics in neural networks is winner-take-all (WTA), which has been widely used in unsupervised (competitive) learning, cortical processing, and attentional control. Because of global connectivity, WTA networks, however, do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. We propose a new architecture that maintains spatial relations between input features. This selection network builds on LEGION (Locally Excitatory Globally Inhibitory Oscillator Networks) dynamics and slow inhibition. In an input scene with many objects (patterns), the network selects the largest object. This system can be easily adjusted to select several largest objects, which then alternate in time. We further show that a twostage selection network gains efficiency by combining selection with parallel removal of noisy regions. The network is applied to select the most salient object in real images. As a special case, the selection network without local excitation gives rise to a new form of oscillatory WTA. ",
+ "neighbors": [
+ 1260
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 320,
+ "label": 4,
+ "text": "Title: Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes \nAbstract: Reinforcement learning (RL) has become a central paradigm for solving learning-control problems in robotics and artificial intelligence. RL researchers have focussed almost exclusively on problems where the controller has to maximize the discounted sum of payoffs. However, as emphasized by Schwartz (1993), in many problems, e.g., those for which the optimal behavior is a limit cycle, it is more natural and com-putationally advantageous to formulate tasks so that the controller's objective is to maximize the average payoff received per time step. In this paper I derive new average-payoff RL algorithms as stochastic approximation methods for solving the system of equations associated with the policy evaluation and optimal control questions in average-payoff RL tasks. These algorithms are analogous to the popular TD and Q-learning algorithms already developed for the discounted-payoff case. One of the algorithms derived here is a significant variation of Schwartz's R-learning algorithm. Preliminary empirical results are presented to validate these new algorithms. ",
+ "neighbors": [
+ 92,
+ 169,
+ 178,
+ 318,
+ 327,
+ 507,
+ 1007
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 321,
+ "label": 3,
+ "text": "Title: A Tutorial on Learning With Bayesian Networks \nAbstract: Technical Report MSR-TR-95-06 ",
+ "neighbors": [
+ 238,
+ 525,
+ 852,
+ 867,
+ 914,
+ 994,
+ 1045,
+ 1087,
+ 1335
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 322,
+ "label": 4,
+ "text": "Title: Scaling Up Average Reward Reinforcement Learning by Approximating the Domain Models and the Value Function \nAbstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply ",
+ "neighbors": [
+ 18,
+ 92,
+ 318,
+ 776,
+ 994,
+ 1209
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 323,
+ "label": 6,
+ "text": "Title: Bayesian Methods for Adaptive Models \nAbstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply ",
+ "neighbors": [
+ 43,
+ 86,
+ 139,
+ 543
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 324,
+ "label": 4,
+ "text": "Title: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks \nAbstract: Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm. ",
+ "neighbors": [
+ 33,
+ 145,
+ 214,
+ 391,
+ 634,
+ 666,
+ 1024
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 325,
+ "label": 4,
+ "text": "Title: Evolving Obstacle Avoidance Behavior in a Robot Arm \nAbstract: Existing approaches for learning to control a robot arm rely on supervised methods where correct behavior is explicitly given. It is difficult to learn to avoid obstacles using such methods, however, because examples of obstacle avoidance behavior are hard to generate. This paper presents an alternative approach that evolves neural network controllers through genetic algorithms. No input/output examples are necessary, since neuro-evolution learns from a single performance measurement over the entire task of grasping an object. The approach is tested in a simulation of the OSCAR-6 robot arm which receives both visual and sensory input. Neural networks evolved to effectively avoid obstacles at various locations to reach random target locations.",
+ "neighbors": [
+ 20,
+ 91,
+ 123,
+ 140,
+ 285
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 326,
+ "label": 4,
+ "text": "Title: Reinforcement Learning with Soft State Aggregation \nAbstract: It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortunately almost all of the theory of reinforcement learning assumes lookup table representations. In this paper we address the pressing issue of combining function approximation and RL, and present 1) a function approx-imator based on a simple extension to state aggregation (a commonly used form of compact representation), namely soft state aggregation, 2) a theory of convergence for RL with arbitrary, but fixed, soft state aggregation, 3) a novel intuitive understanding of the effect of state aggregation on online RL, and 4) a new heuristic adaptive state aggregation algorithm that finds improved compact representations by exploiting the non-discrete nature of soft state aggregation. Preliminary empirical results are also presented. ",
+ "neighbors": [
+ 169,
+ 262,
+ 327,
+ 426
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 327,
+ "label": 4,
+ "text": "Title: Machine Learning Learning to Predict by the Methods of Temporal Differences Keywords: Incremental learning, prediction,\nAbstract: This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. ",
+ "neighbors": [
+ 1,
+ 18,
+ 30,
+ 33,
+ 45,
+ 48,
+ 52,
+ 59,
+ 95,
+ 136,
+ 169,
+ 170,
+ 177,
+ 178,
+ 193,
+ 212,
+ 232,
+ 263,
+ 264,
+ 272,
+ 273,
+ 287,
+ 300,
+ 315,
+ 320,
+ 326,
+ 328,
+ 334,
+ 350,
+ 362,
+ 370,
+ 391,
+ 402,
+ 426,
+ 449,
+ 511,
+ 529,
+ 774,
+ 776,
+ 800,
+ 859,
+ 954,
+ 962,
+ 976,
+ 981,
+ 1007,
+ 1062,
+ 1081,
+ 1253,
+ 1267,
+ 1269,
+ 1326
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 328,
+ "label": 4,
+ "text": "Title: Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming \nAbstract: This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.",
+ "neighbors": [
+ 5,
+ 18,
+ 95,
+ 104,
+ 159,
+ 169,
+ 187,
+ 193,
+ 252,
+ 263,
+ 264,
+ 270,
+ 272,
+ 276,
+ 327,
+ 344,
+ 370,
+ 391,
+ 400,
+ 500,
+ 811,
+ 862,
+ 916,
+ 976,
+ 994,
+ 1162,
+ 1267,
+ 1269
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 329,
+ "label": 4,
+ "text": "Title: Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding \nAbstract: On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned o*ine. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes (\"rollouts\"), as in classical Monte Carlo methods, and as in the TD() algorithm when = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general .",
+ "neighbors": [
+ 9,
+ 161,
+ 287
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 330,
+ "label": 6,
+ "text": "Title: A decision-theoretic generalization of on-line learning and an application to boosting how the weight-update rule\nAbstract: We consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [10] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n",
+ "neighbors": [
+ 147,
+ 257,
+ 294,
+ 317,
+ 445,
+ 586,
+ 619,
+ 714,
+ 809,
+ 846,
+ 946,
+ 1067
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 331,
+ "label": 2,
+ "text": "Title: A New Learning Algorithm for Blind Signal Separation \nAbstract: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations. ",
+ "neighbors": [
+ 32,
+ 335,
+ 505,
+ 506,
+ 609,
+ 700,
+ 778,
+ 845,
+ 848,
+ 992
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 332,
+ "label": 2,
+ "text": "Title: Avoiding Overfitting with BP-SOM \nAbstract: Overfitting is a well-known problem in the fields of symbolic and connectionist machine learning. It describes the deterioration of gen-eralisation performance of a trained model. In this paper, we investigate the ability of a novel artificial neural network, bp-som, to avoid overfitting. bp-som is a hybrid neural network which combines a multi-layered feed-forward network (mfn) with Kohonen's self-organising maps (soms). During training, supervised back-propagation learning and unsupervised som learning cooperate in finding adequate hidden-layer representations. We show that bp-som outperforms standard backpropagation, and also back-propagation with a weight decay when dealing with the problem of overfitting. In addition, we show that bp-som succeeds in preserving generalisation performance under hidden-unit pruning, where both other methods fail.",
+ "neighbors": [
+ 64,
+ 365,
+ 432,
+ 510
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 333,
+ "label": 6,
+ "text": "Title: On the Learnability of Discrete Distributions (extended abstract) \nAbstract: We describe a model of iterated belief revision that extends the AGM theory of revision to account for the effect of a revision on the conditional beliefs of an agent. In particular, this model ensures that an agent makes as few changes as possible to the conditional component of its belief set. Adopting the Ramsey test, minimal conditional revision provides acceptance conditions for arbitrary right-nested conditionals. We show that problem of determining acceptance of any such nested conditional can be reduced to acceptance tests for unnested conditionals. Thus, iterated revision can be accomplished in a virtual manner, using uniterated revision.",
+ "neighbors": [
+ 137,
+ 316,
+ 392,
+ 998,
+ 1056,
+ 1220
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 334,
+ "label": 4,
+ "text": "Title: Issues in Using Function Approximation for Reinforcement Learning \nAbstract: Reinforcement learning techniques address the problem of learning to select actions in unknown, dynamic environments. It is widely acknowledged that to be of use in complex domains, reinforcement learning techniques must be combined with generalizing function approximation methods such as artificial neural networks. Little, however, is understood about the theoretical properties of such combinations, and many researchers have encountered failures in practice. In this paper we identify a prime source of such failuresnamely, a systematic overestimation of utility values. Using Watkins' Q-Learning [18] as an example, we give a theoretical account of the phenomenon, deriving conditions under which one may expected it to cause learning to fail. Employing some of the most popular function approximators, we present experimental results which support the theoretical findings. ",
+ "neighbors": [
+ 95,
+ 318,
+ 327,
+ 426,
+ 489,
+ 511,
+ 776,
+ 1269
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 335,
+ "label": 2,
+ "text": "Title: An information-maximisation approach to blind separation and blind deconvolution \nAbstract: We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information max-imisation provides a unifying framework for problems in `blind' signal processing. fl Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. ",
+ "neighbors": [
+ 32,
+ 203,
+ 331,
+ 353,
+ 422,
+ 483,
+ 487,
+ 506,
+ 609,
+ 778,
+ 848,
+ 992
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 336,
+ "label": 3,
+ "text": "Title: Operations for Learning with Graphical Models decomposition techniques and the demonstration that graphical models provide\nAbstract: This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. ",
+ "neighbors": [
+ 143,
+ 181,
+ 223,
+ 227,
+ 835,
+ 852,
+ 1087,
+ 1272,
+ 1335
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 337,
+ "label": 0,
+ "text": "Title: Learning to Improve Case Adaptation by Introspective Reasoning and CBR \nAbstract: In current CBR systems, case adaptation is usually performed by rule-based methods that use task-specific rules hand-coded by the system developer. The ability to define those rules depends on knowledge of the task and domain that may not be available a priori, presenting a serious impediment to endowing CBR systems with the needed adaptation knowledge. This paper describes ongoing research on a method to address this problem by acquiring adaptation knowledge from experience. The method uses reasoning from scratch, based on introspective reasoning about the requirements for successful adaptation, to build up a library of adaptation cases that are stored for future reuse. We describe the tenets of the approach and the types of knowledge it requires. We sketch initial computer implementation, lessons learned, and open questions for further study.",
+ "neighbors": [
+ 338,
+ 639,
+ 679,
+ 681,
+ 833
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 338,
+ "label": 0,
+ "text": "Title: Representing Self-knowledge for Introspection about Memory Search \nAbstract: This position paper sketches a framework for modeling introspective reasoning and discusses the relevance of that framework for modeling introspective reasoning about memory search. It argues that effective and flexible memory processing in rich memories should be built on five types of explicitly represented self-knowledge: knowledge about information needs, relationships between different types of information, expectations for the actual behavior of the information search process, desires for its ideal behavior, and representations of how those expectations and desires relate to its actual performance. This approach to modeling memory search is both an illustration of general principles for modeling introspective reasoning and a step towards addressing the problem of how a reasoner human or machinecan acquire knowledge about the properties of its own knowledge base. ",
+ "neighbors": [
+ 26,
+ 337
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 339,
+ "label": 0,
+ "text": "Title: In Machine Learning: A Multistrategy Approach, Vol. IV Macro and Micro Perspectives of Multistrategy Learning \nAbstract: Machine learning techniques are perceived to have a great potential as means for the acquisition of knowledge; nevertheless, their use in complex engineering domains is still rare. Most machine learning techniques have been studied in the context of knowledge acquisition for well defined tasks, such as classification. Learning for these tasks can be handled by relatively simple algorithms. Complex domains present difficulties that can be approached by combining the strengths of several complementing learning techniques, and overcoming their weaknesses by providing alternative learning strategies. This study presents two perspectives, the macro and the micro, for viewing the issue of multistrategy learning. The macro perspective deals with the decomposition of an overall complex learning task into relatively well-defined learning tasks, and the micro perspective deals with designing multistrategy learning techniques for supporting the acquisition of knowledge for each task. The two perspectives are discussed in the context of ",
+ "neighbors": [
+ 151,
+ 475,
+ 834
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 340,
+ "label": 0,
+ "text": "Title: Introspective reasoning using meta-explanations for multistrategy learning \nAbstract: In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This chapter presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task. ",
+ "neighbors": [
+ 26,
+ 35,
+ 376,
+ 639,
+ 680,
+ 718,
+ 1238,
+ 1297
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 341,
+ "label": 3,
+ "text": "Title: A MEAN FIELD LEARNING ALGORITHM FOR UNSUPERVISED NEURAL NETWORKS \nAbstract: We introduce a learning algorithm for unsupervised neural networks based on ideas from statistical mechanics. The algorithm is derived from a mean field approximation for large, layered sigmoid belief networks. We show how to (approximately) infer the statistics of these networks without resort to sampling. This is done by solving the mean field equations, which relate the statistics of each unit to those of its Markov blanket. Using these statistics as target values, the weights in the network are adapted by a local delta rule. We evaluate the strengths and weaknesses of these networks for problems in statistical pattern recognition. ",
+ "neighbors": [
+ 143,
+ 240,
+ 375
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 342,
+ "label": 5,
+ "text": "Title: An investigation of noise-tolerant relational concept learning algorithms \nAbstract: We discuss the types of noise that may occur in relational learning systems and describe two approaches to addressing noise in a relational concept learning algorithm. We then evaluate each approach experimentally.",
+ "neighbors": [
+ 194,
+ 217,
+ 604,
+ 716,
+ 1108,
+ 1189,
+ 1190
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 343,
+ "label": 2,
+ "text": "Title: NONPARAMETRIC SELECTION OF INPUT VARIABLES FOR CONNECTIONIST LEARNING \nAbstract: Technical Report UMIACS-TR-97-77 and CS-TR-3843 Abstract ",
+ "neighbors": [
+ 50,
+ 122,
+ 240,
+ 1170
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 344,
+ "label": 4,
+ "text": "Title: LEARNING TO AVOID COLLISIONS: A REINFORCEMENT LEARNING PARADIGM FOR MOBILE ROBOT NAVIGATION \nAbstract: The paper describes a self-learning control system for a mobile robot. Based on sensor information the control system has to provide a steering signal in such a way that collisions are avoided. Since in our case no `examples' are available, the system learns on the basis of an external reinforcement signal which is negative in case of a collision and zero otherwise. We describe the adaptive algorithm which is used for a discrete coding of the state space, and the adaptive algorithm for learning the correct mapping from the input (state) vector to the output (steering) signal. ",
+ "neighbors": [
+ 104,
+ 169,
+ 328,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 345,
+ "label": 2,
+ "text": "Title: APPROXIMATION IN L p (R d FROM SPACES SPANNED BY THE PERTURBED INTEGER TRANSLATES OF\nAbstract: May 14, 1995 Abstract. The problem of approximating smooth L p -functions from spaces spanned by the integer translates of a radially symmetric function is very well understood. In case the points of translation, ffi, are scattered throughout R d , the approximation problem is only well understood in the \"stationary\" setting. In this work, we treat the \"non-stationary\" setting under the assumption that ffi is a small perturbation of Z d . Our results, which are similar in many respects to the known results for the case ffi = Z d , apply specifically to the examples of the Gauss kernel and the Generalized Multiquadric.",
+ "neighbors": [
+ 210,
+ 211
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 346,
+ "label": 6,
+ "text": "Title: Toward Efficient Agnostic Learning \nAbstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. ",
+ "neighbors": [
+ 165,
+ 255,
+ 280,
+ 316,
+ 392,
+ 493,
+ 592,
+ 626,
+ 668,
+ 764,
+ 1094
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 347,
+ "label": 0,
+ "text": "Title: THE DESIGN AND IMPLEMENTATION OF A CASE-BASED PLANNING FRAMEWORK WITHIN A PARTIAL-ORDER PLANNER \nAbstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. ",
+ "neighbors": [
+ 173,
+ 348
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 348,
+ "label": 0,
+ "text": "Title: Design and Implementation of a Replay Framework based on a Partial Order Planner \nAbstract: In this paper we describe the design and implementation of the derivation replay framework, dersnlp+ebl (Derivational snlp+ebl), which is based within a partial order planner. dersnlp+ebl replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (ebl) techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, dersnlp+ebl normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of dersnlp+ebl in improving planning performance on randomly-generated problems drawn from a complex domain. ",
+ "neighbors": [
+ 347,
+ 636,
+ 672,
+ 906
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 349,
+ "label": 5,
+ "text": "Title: Dynamic Hammock Predication for Non-predicated Instruction Set Architectures \nAbstract: Conventional speculative architectures use branch prediction to evaluate the most likely execution path during program execution. However, certain branches are difficult to predict. One solution to this problem is to evaluate both paths following such a conditional branch. Predicated execution can be used to implement this form of multi-path execution. Predicated architectures fetch and issue instructions that have associated predicates. These predicates indicate if the instruction should commit its result. Predicating a branch reduces the number of branches executed, eliminating the chance of branch misprediction at the cost of executing additional instructions. In this paper, we propose a restricted form of multi-path execution called Dynamic Predication for architectures with little or no support for predicated instructions in their instruction set. Dynamic predication dynamically predicates instruction sequences in the form of a branch hammock, concurrently executing both paths of the branch. A branch hammock is a short forward branch that spans a few instructions in the form of an if-then or if-then-else construct. We mark these and other constructs in the executable. When the decode stage detects such a sequence, it passes a predicated instruction sequence to a dynamically scheduled execution core. Our results show that dynamic predication can accrue speedups of up to 13%. ",
+ "neighbors": [
+ 87,
+ 175,
+ 243
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 350,
+ "label": 4,
+ "text": "Title: Active Gesture Recognition using Partially Observable Markov Decision Processes \nAbstract: M.I.T Media Laboratory Perceptual Computing Section Technical Report No. 367 Appeared 13th IEEE Intl. Conference on Pattern Recognition (ICPR '96), Vienna, Austria. Abstract We present a foveated gesture recognition system that guides an active camera to foveate salient features based on a reinforcement learning paradigm. Using vision routines previously implemented for an interactive environment, we determine the spatial location of salient body parts of a user and guide an active camera to obtain images of gestures or expressions. A hidden-state reinforcement learning paradigm based on the Partially Observable Markov Decision Process (POMDP) is used to implement this visual attention. The attention module selects targets to foveate based on the goal of successful recognition, and uses a new multiple-model Q-learning formulation. Given a set of target and distractor gestures, our system can learn where to foveate to maximally discriminate a particular gesture.",
+ "neighbors": [
+ 318,
+ 327,
+ 357,
+ 962
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 351,
+ "label": 1,
+ "text": "Title: Every Niching Method has its Niche: Fitness Sharing and Implicit Sharing Compared \nAbstract: Various extensions to the Genetic Algorithm (GA) attempt to find all or most optima in a search space containing several optima. Many of these emulate natural speciation. For co-evolutionary learning to succeed in a range of management and control problems, such as learning game strategies, such methods must find all or most optima. However, suitable comparison studies are rare. We compare two similar GA specia-tion methods, fitness sharing and implicit sharing. Using a realistic letter classification problem, we find they have advantages under different circumstances. Implicit sharing covers optima more comprehensively, when the population is large enough for a species to form at each optimum. With a population not large enough to do this, fitness sharing can find the optima with larger basins of attraction, and ignore the peaks with narrow bases, while implicit sharing is more easily distracted. This indicates that for a speciated GA trying to find as many near-global optima as possible, implicit sharing works well only if the population is large enough. This requires prior knowledge of how many peaks exist.",
+ "neighbors": [
+ 91,
+ 632,
+ 1206
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 352,
+ "label": 0,
+ "text": "Title: METHOD-SPECIFIC KNOWLEDGE COMPILATION: TOWARDS PRACTICAL DESIGN SUPPORT SYSTEMS \nAbstract: Modern knowledge systems for design typically employ multiple problem-solving methods which in turn use different kinds of knowledge. The construction of a heterogeneous knowledge system that can support practical design thus raises two fundamental questions: how to accumulate huge volumes of design information, and how to support heterogeneous design processing? Fortunately, partial answers to both questions exist separately. Legacy databases already contain huge amounts of general-purpose design information. In addition, modern knowledge systems typically characterize the kinds of knowledge needed by specific problem-solving methods quite precisely. This leads us to hypothesize method-specific data-to-knowledge compilation as a potential mechanism for integrating heterogeneous knowledge systems and legacy databases for design. In this paper, first we outline a general computational architecture called HIPED for this integration. Then, we focus on the specific issue of how to convert data accessed from a legacy database into a form appropriate to the problem-solving method used in a heterogeneous knowledge system. We describe an experiment in which a legacy knowledge system called Interactive Kritik is integrated with an ORACLE database using IDI as the communication tool. The limited experiment indicates the computational feasibility of method-specific data-to-knowledge compilation, but also raises additional research issues. ",
+ "neighbors": [
+ 310,
+ 390,
+ 599,
+ 913
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 353,
+ "label": 2,
+ "text": "Title: Learning Viewpoint Invariant Representations of Faces in an Attractor Network \nAbstract: In natural visual experience, different views of an object tend to appear in close temporal proximity as an animal manipulates the object or navigates around it. We investigated the ability of an attractor network to acquire view invariant visual representations by associating first neighbors in a pattern sequence. The pattern sequence contains successive views of faces of ten individuals as they change pose. Under the network dynamics developed by Griniasty, Tsodyks & Amit (1993), multiple views of a given subject fall into the same basin of attraction. We use an independent component (ICA) representation of the faces for the input patterns (Bell & Sejnowski, 1995). The ICA representation has advantages over the principal component representation (PCA) for viewpoint-invariant recognition both with and without the attractor network, suggesting that ICA is a better representation than PCA for object recognition. ",
+ "neighbors": [
+ 335
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 354,
+ "label": 2,
+ "text": "Title: A Support Vector Machine Approach to Decision Trees \nAbstract: Key ideas from statistical learning theory and support vector machines are generalized to decision trees. A support vector machine is used for each decision in the tree. The \"optimal\" decision tree is characterized, and both a primal and dual space formulation for constructing the tree are proposed. The result is a method for generating logically simple decision trees with multivariate linear or nonlinear decisions. The preliminary results indicate that the method produces simple trees that generalize well with respect to other decision tree algorithms and single support vector machines.",
+ "neighbors": [
+ 245,
+ 477,
+ 733
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 355,
+ "label": 2,
+ "text": "Title: Regularization Theory and Neural Networks Architectures \nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer. 1 This paper will appear on Neural Computation, vol. 7, pages 219-269, 1995. An earlier version of ",
+ "neighbors": [
+ 357,
+ 396,
+ 559
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 356,
+ "label": 2,
+ "text": "Title: Interactive Segmentation of Three-dimensional Medical Images (Extended abstract) \nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer. 1 This paper will appear on Neural Computation, vol. 7, pages 219-269, 1995. An earlier version of ",
+ "neighbors": [
+ 417,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 357,
+ "label": 2,
+ "text": "Title: Learning networks for face analysis and synthesis \nAbstract: This paper presents an overview of the face-related projects in our group. The unifying theme underlying our work is the use of example-based learning methods for both analyzing and synthesizing face images. We label the example face images (and for the problem of face detection, \"near miss\" faces as well) with descriptive parameters for pose, expression, identity, and face vs. non-face. Then, by using example-based learning techniques, we develop networks for performing analysis tasks such as pose and expression estimation, face recognition, and face detection in cluttered scenes. In addition to these analysis applications, we show how the example-based technique can also be used as a novel method for image synthesis that is for computer graphics. ",
+ "neighbors": [
+ 213,
+ 350,
+ 355,
+ 370,
+ 417,
+ 521,
+ 543,
+ 613,
+ 625,
+ 633,
+ 736,
+ 759,
+ 830,
+ 872,
+ 929,
+ 956,
+ 969,
+ 1229,
+ 1279,
+ 1341
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 358,
+ "label": 2,
+ "text": "Title: A Generalized Hidden Markov Model for the Recognition of Human Genes in DNA \nAbstract: We present a statistical model of genes in DNA. A Generalized Hidden Markov Model (GHMM) provides the framework for describing the grammar of a legal parse of a DNA sequence (Stormo & Haussler 1994). Probabilities are assigned to transitions between states in the GHMM and to the generation of each nucleotide base given a particular state. Machine learning techniques are applied to optimize these probabilities using a standardized training set. Given a new candidate sequence, the best parse is deduced from the model using a dynamic programming algorithm to identify the path through the model with maximum probability. The GHMM is flexible and modular, so new sensors and additional states can be inserted easily. In addition, it provides simple solutions for integrating cardinality constraints, reading frame constraints, \"indels\", and homology searching. The description and results of an implementation of such a gene-finding model, called Genie, is presented. The exon sensor is a codon frequency model conditioned on windowed nucleotide frequency and the preceding codon. Two neural networks are used, as in (Brunak, Engelbrecht, & Knudsen 1991), for splice site prediction. We show that this simple model performs quite well. For a cross-validated standard test set of 304 genes [ftp://www-hgc.lbl.gov/pub/genesets] in human DNA, our gene-finding system identified up to 85% of protein-coding bases correctly with a specificity of 80%. 58% of exons were exactly identified with a specificity of 51%. Genie is shown to perform favorably compared with several other gene-finding systems. ",
+ "neighbors": [
+ 3,
+ 131,
+ 156,
+ 360,
+ 1113,
+ 1273,
+ 1299
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 359,
+ "label": 6,
+ "text": "Title: Efficient Algorithms for Learning to Play Repeated Games Against Computationally Bounded Adversaries \nAbstract: We study the problem of efficiently learning to play a game optimally against an unknown adversary chosen from a computationally bounded class. We both contribute to the line of research on playing games against finite automata, and expand the scope of this research by considering new classes of adversaries. We introduce the natural notions of games against recent history adversaries (whose current action is determined by some simple boolean formula on the recent history of play), and games against statistical adversaries (whose current action is determined by some simple function of the statistics of the entire history of play). In both cases we give efficient algorithms for learning to play penny-matching and a more difficult game called contract . We also give the most powerful positive result to date for learning to play against finite automata, an efficient algorithm for learning to play any game against any finite automata with probabilistic actions and low cover time. ",
+ "neighbors": [
+ 54
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 360,
+ "label": 2,
+ "text": "Title: A Decision Tree System for Finding Genes in DNA \nAbstract: MORGAN is an integrated system for finding genes in vertebrate DNA sequences. MORGAN uses a variety of techniques to accomplish this task, the most distinctive of which is a decision tree classifier. The decision tree system is combined with new methods for identifying start codons, donor sites, and acceptor sites, and these are brought together in a frame-sensitive dynamic programming algorithm that finds the optimal segmentation of a DNA sequence into coding and noncoding regions (exons and introns). The optimal segmentation is dependent on a separate scoring function that takes a subsequence and assigns to it a score reflecting the probability that the sequence is an exon. The scoring functions in MORGAN are sets of decision trees that are combined to give a probability estimate. Experimental results on a database of 570 vertebrate DNA sequences show that MORGAN has excellent performance by many different measures. On a separate test set, it achieves an overall accuracy of 95%, with a correlation coefficient of 0.78 and a sensitivity and specificity for coding bases of 83% and 79%. In addition, MORGAN identifies 58% of coding exons exactly; i.e., both the beginning and end of the coding regions are predicted correctly. This paper describes the MORGAN system, including its decision tree routines and the algorithms for site recognition, and its performance on a benchmark database of vertebrate DNA. ",
+ "neighbors": [
+ 156,
+ 245,
+ 358,
+ 1090
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 361,
+ "label": 2,
+ "text": "Title: The Use of Neural Networks to Support \"Intelligent\" Scientific Computing \nAbstract: In this paper we report on the use of backpropagation based neural networks to implement a phase of the computational intelligence process of the PYTHIA[3] expert system for supporting the numerical simulation of applications modelled by partial differential equations (PDEs). PYTHIA is an exemplar based reasoning system that provides advice on what method and parameters to use for the simulation of a specified PDE based application. When advice is requested, the characteristics of the given model are matched with the characteristics of previously seen classes of models. The performance of various solution methods on previously seen similar classes of models is then used as a basis for predicting what method to use. Thus, a major step of the reasoning process in PYTHIA involves the analysis and categorization of models into classes of models based on their characteristics. In this study we demonstrate the use of neural networks to identify the class of predefined models whose characteristics match the ones of the specified PDE based application. ",
+ "neighbors": [
+ 126
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 362,
+ "label": 4,
+ "text": "Title: Reinforcement Learning Methods for Continuous-Time Markov Decision Problems \nAbstract: Semi-Markov Decision Problems are continuous time generalizations of discrete time Markov Decision Problems. A number of reinforcement learning algorithms have been developed recently for the solution of Markov Decision Problems, based on the ideas of asynchronous dynamic programming and stochastic approximation. Among these are TD(), Q-learning, and Real-time Dynamic Programming. After reviewing semi-Markov Decision Problems and Bellman's optimality equation in that context, we propose algorithms similar to those named above, adapted to the solution of semi-Markov Decision Problems. We demonstrate these algorithms by applying them to the problem of determining the optimal control for a simple queueing system. We conclude with a discussion of circumstances under which these algorithms may be usefully ap plied.",
+ "neighbors": [
+ 269,
+ 318,
+ 327,
+ 426,
+ 1007
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 363,
+ "label": 2,
+ "text": "Title: CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS \nAbstract: There has recently been widespread interest in the use of multiple models for classification and regression in the statistics and neural networks communities. The Hierarchical Mixture of Experts (HME) [1] has been successful in a number of regression problems, yielding significantly faster training through the use of the Expectation Maximisation algorithm. In this paper we extend the HME to classification and results are reported for three common classification benchmark tests: Exclusive-Or, N-input Parity and Two Spirals. ",
+ "neighbors": [
+ 40
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 364,
+ "label": 3,
+ "text": "Title: State-Space Abstraction for Anytime Evaluation of Probabilistic Networks \nAbstract: One important factor determining the computa - tional complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an any - time procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the proce - dure exhibits a smooth improvement in approxi - mation quality as computation time increases. This suggests that statespace abstraction is one more useful control parameter for designing real-time probabilistic reasoners. ",
+ "neighbors": [
+ 373,
+ 606,
+ 660,
+ 1046,
+ 1123,
+ 1209
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 365,
+ "label": 6,
+ "text": "Title: Measuring the Difficulty of Specific Learning Problems \nAbstract: Existing complexity measures from contemporary learning theory cannot be conveniently applied to specific learning problems (e.g., training sets). Moreover, they are typically non-generic, i.e., they necessitate making assumptions about the way in which the learner will operate. The lack of a satisfactory, generic complexity measure for learning problems poses difficulties for researchers in various areas; the present paper puts forward an idea which may help to alleviate these. It shows that supervised learning problems fall into two, generic, complexity classes only one of which is associated with computational tractability. By determining which class a particular problem belongs to, we can thus effectively evaluate its degree of generic difficulty. ",
+ "neighbors": [
+ 64,
+ 91,
+ 332,
+ 383,
+ 406
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 366,
+ "label": 2,
+ "text": "Title: Scatter-partitioning RBF network for function regression and image \nAbstract: segmentation: Preliminary results Abstract. Scatter-partitioning Radial Basis Function (RBF) networks increase their number of degrees of freedom with the complexity of an input-output mapping to be estimated on the basis of a supervised training data set. Due to its superior expressive power a scatter-partitioning Gaussian RBF (GRBF) model, termed Supervised Growing Neural Gas (SGNG), is selected from the literature. SGNG employs a one-stage error-driven learning strategy and is capable of generating and removing both hidden units and synaptic connections. A slightly modified SGNG version is tested as a function estimator when the training surface to be fitted is an image, i.e., a 2-D signal whose size is finite. The relationship between the generation, by the learning system, of disjointed maps of hidden units and the presence, in the image, of pictorially homogeneous subsets (segments) is investigated. Unfortunately, the examined SGNG version performs poorly both as function estimator and image segmenter. This may be due to an intrinsic inadequacy of the one-stage error-driven learning strategy to adjust structural parameters and output weights simultaneously but consistently. In the framework of RBF networks, further studies should investigate the combination of two-stage error-driven learning strategies with synapse generation and removal criteria. y Internal report of the paper entitled \"Image segmentation with scatter-partitioning RBF networks: A feasibility study,\" to be presented at the conference Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation, part of SPIE's International Symposium on Optical Science, Engineering and Instrumentation, 19-24 July 1998, San Diego, CA. ",
+ "neighbors": [
+ 153,
+ 399
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 367,
+ "label": 2,
+ "text": "Title: Learning Symbolic Rules Using Artificial Neural Networks \nAbstract: A distinct advantage of symbolic learning algorithms over artificial neural networks is that typically the concept representations they form are more easily understood by humans. One approach to understanding the representations formed by neural networks is to extract symbolic rules from trained networks. In this paper we describe and investigate an approach for extracting rules from networks that uses (1) the NofM extraction algorithm, and (2) the network training method of soft weight-sharing. Previously, the NofM algorithm had been successfully applied only to knowledge-based neural networks. Our experiments demonstrate that our extracted rules generalize better than rules learned using the C4.5 system. In addition to being accurate, our extracted rules are also reasonably comprehensible.",
+ "neighbors": [
+ 602,
+ 712,
+ 871,
+ 934
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 368,
+ "label": 0,
+ "text": "Title: Using Introspective Reasoning to Select Learning Strategies \nAbstract: In order to learn effectively, a system must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires a declaratflive representation of the reasoning performed by the system during the performance task. This paper presents a taxonomy of possible reasoning failures that can occur during this task, their declarative representations, and their associations with particular learning strategies. We propose a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies in order to avoid similar mistakes in the future. A program called Meta-AQUA embodies the theory and processes examples in the domain of drug smuggling. ",
+ "neighbors": [
+ 855,
+ 857
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 369,
+ "label": 3,
+ "text": "Title: Toward Optimal Feature Selection \nAbstract: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computation-ally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively han dles datasets with large numbers of features.",
+ "neighbors": [
+ 227,
+ 242,
+ 371,
+ 884,
+ 1032,
+ 1280
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 370,
+ "label": 4,
+ "text": "Title: Chapter 1 Reinforcement Learning for Planning and Control \nAbstract: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computation-ally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively han dles datasets with large numbers of features.",
+ "neighbors": [
+ 169,
+ 327,
+ 328,
+ 357,
+ 811
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 371,
+ "label": 6,
+ "text": "Title: Learning Boolean Concepts in the Presence of Many Irrelevant Features \nAbstract: In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research. ",
+ "neighbors": [
+ 62,
+ 98,
+ 118,
+ 148,
+ 216,
+ 242,
+ 271,
+ 369,
+ 380,
+ 384
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 372,
+ "label": 4,
+ "text": "Title: Robot Shaping: Developing Situated Agents through Learning \nAbstract: Learning plays a vital role in the development of situated agents. In this paper, we explore the use of reinforcement learning to \"shape\" a robot to perform a predefined target behavior. We connect both simulated and real robots to A LECSYS, a parallel implementation of a learning classifier system with an extended genetic algorithm. After classifying different kinds of Animat-like behaviors, we explore the effects on learning of different types of agent's architecture (monolithic, flat and hierarchical) and of training strategies. In particular, hierarchical architecture requires the agent to learn how to coordinate basic learned responses. We show that the best results are achieved when both the agent's architecture and the training strategy match the structure of the behavior pattern to be learned. We report the results of a number of experiments carried out both in simulated and in real environments, and show that the results of simulations carry smoothly to real robots. While most of our experiments deal with simple reactive behavior, in one of them we demonstrate the use of a simple and general memory mechanism. As a whole, our experimental activity demonstrates that classifier systems with genetic algorithms can be practically employed to develop autonomous agents. ",
+ "neighbors": [
+ 169,
+ 318,
+ 878,
+ 1081,
+ 1143,
+ 1144,
+ 1156,
+ 1169,
+ 1346
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 373,
+ "label": 3,
+ "text": "Title: Computational complexity reduction for BN2O networks using similarity of states \nAbstract: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network|a two layer networks often used in diagnostic problems|can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity. ",
+ "neighbors": [
+ 295,
+ 364
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 374,
+ "label": 6,
+ "text": "Title: Learning a set of primitive actions with an Induction of decision trees. Machine Learning, 1(1):81-106,\nAbstract: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network|a two layer networks often used in diagnostic problems|can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity. ",
+ "neighbors": [
+ 127,
+ 169,
+ 200,
+ 216,
+ 220,
+ 247,
+ 252,
+ 299,
+ 407,
+ 453,
+ 587,
+ 735,
+ 812,
+ 818,
+ 858,
+ 1028,
+ 1037,
+ 1142,
+ 1247
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 375,
+ "label": 3,
+ "text": "Title: Bayesian Unsupervised Learning of Higher Order Structure \nAbstract: Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for representing and learning higher order statistical relations among inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We present an algorithm that efficiently discovers higher order structure using EM and Gibbs sampling. The model can be interpreted as a stochastic recurrent network in which ambiguity in lower-level states is resolved through feedback from higher levels. We demonstrate the performance of the algorithm on bench mark problems.",
+ "neighbors": [
+ 143,
+ 341,
+ 969
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 376,
+ "label": 0,
+ "text": "Title: Modeling Case-based Planning for Repairing Reasoning Failures \nAbstract: One application of models of reasoning behavior is to allow a reasoner to introspectively detect and repair failures of its own reasoning process. We address the issues of the transferability of such models versus the specificity of the knowledge in them, the kinds of knowledge needed for self-modeling and how that knowledge is structured, and the evaluation of introspective reasoning systems. We present the ROBBIE system which implements a model of its planning processes to improve the planner in response to reasoning failures. We show how ROBBIE's hierarchical model balances model generality with access to implementation-specific details, and discuss the qualitative and quantitative measures we have used for evaluating its introspective component. ",
+ "neighbors": [
+ 26,
+ 340,
+ 1270
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 377,
+ "label": 3,
+ "text": "Title: On the Markov Equivalence of Chain Graphs, Undirected Graphs, and Acyclic Digraphs \nAbstract: Graphical Markov models use undirected graphs (UDGs), acyclic directed graphs (ADGs), or (mixed) chain graphs to represent possible dependencies among random variables in a multivariate distribution. Whereas a UDG is uniquely determined by its associated Markov model, this is not true for ADGs or for general chain graphs (which include both UDGs and ADGs as special cases). This paper addresses three questions regarding the equivalence of graphical Markov models: when is a given chain graph Markov equivalent (1) to some UDG? (2) to some (at least one) ADG? (3) to some decomposable UDG? The answers are obtained by means of an extension of Frydenberg's (1990) elegant graph-theoretic characterization of the Markov equivalence of chain graphs.",
+ "neighbors": [
+ 121,
+ 181
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 378,
+ "label": 0,
+ "text": "Title: Concept Learning and Heuristic Classification in Weak-Theory Domains 1 \nAbstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.",
+ "neighbors": [
+ 258,
+ 435,
+ 916,
+ 1117,
+ 1193
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 379,
+ "label": 4,
+ "text": "Title: Learning to Use Selective Attention and Short-Term Memory in Sequential Tasks \nAbstract: This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [ Ron et al., 1994 ] , Parti-game [ Moore, 1993 ] , G-algorithm [ Chap-man and Kaelbling, 1991 ] , and Variable Resolution Dynamic Programming [ Moore, 1991 ] . It builds on Utile Suffix Memory [ McCallum, 1995c ] , which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal statesfar fewer than the 2500 3 states that would have resulted from a fixed-sized history-window ap proach.",
+ "neighbors": [
+ 270,
+ 276,
+ 382,
+ 868
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 380,
+ "label": 2,
+ "text": "Title: A Monotonic Measure for Optimal Feature Selection \nAbstract: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.",
+ "neighbors": [
+ 242,
+ 371
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 381,
+ "label": 5,
+ "text": "Title: ARB: A Hardware Mechanism for Dynamic Reordering of Memory References* \nAbstract: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.",
+ "neighbors": [
+ 142
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 382,
+ "label": 4,
+ "text": "Title: Reinforcement Learning: A Survey \nAbstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word \"reinforcement.\" The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.",
+ "neighbors": [
+ 80,
+ 379,
+ 449
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 383,
+ "label": 2,
+ "text": "Title: Trading Spaces: Computation, Representation and the Limits of Uninformed Learning \nAbstract: fl Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. y The order of names is arbitrary. ",
+ "neighbors": [
+ 91,
+ 365
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 384,
+ "label": 6,
+ "text": "Title: Efficient Algorithms for Identifying Relevant Features \nAbstract: This paper describes efficient methods for exact and approximate implementation of the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This bias is useful for learning domains where many irrelevant features are present in the training data. We first introduce FOCUS-2, a new algorithm that exactly implements the MIN-FEATURES bias. This algorithm is empirically shown to be substantially faster than the FOCUS algorithm previously given in [ Al-muallim and Dietterich, 1991 ] . We then introduce the Mutual-Information-Greedy, Simple-Greedy and Weighted-Greedy algorithms, which apply efficient heuristics for approximating the MIN-FEATURES bias. These algorithms employ greedy heuristics that trade optimality for computational efficiency. Experimental studies show that the learning performance of ID3 is greatly improved when these algorithms are used to preprocess the training data by eliminating the irrelevant features from ID3's consideration. In particular, the Weighted-Greedy algorithm provides an excellent and efficient approximation of the MIN ",
+ "neighbors": [
+ 371
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 385,
+ "label": 2,
+ "text": "Title: Free Energy Minimization Algorithm for Decoding and Cryptanalysis three binary vectors: s of length N\nAbstract: where A is a binary matrix. Our task is to infer s given z and A, and given assumptions about the statistical properties of s and n. This problem arises in the decoding of a noisy signal transmitted using a linear code A, and in the inference of the sequence of a linear feedback shift register (LFSR) from noisy observations [1, 2]. ",
+ "neighbors": [
+ 100
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 386,
+ "label": 2,
+ "text": "Title: Perceptual Development and Learning: From Behavioral, Neurophysiological, and Morphological Evidence To Computational Models \nAbstract: An intelligent system has to be capable of adapting to a constantly changing environment. It therefore, ought to be capable of learning from its perceptual interactions with its surroundings. This requires a certain amount of plasticity in its structure. Any attempt to model the perceptual capabilities of a living system or, for that matter, to construct a synthetic system of comparable abilities, must therefore, account for such plasticity through a variety of developmental and learning mechanisms. This paper examines some results from neuroanatomical, morphological, as well as behavioral studies of the development of visual perception; integrates them into a computational framework; and suggests several interesting experiments with computational models that can yield insights into the development of visual perception. In order to understand the development of information processing structures in the brain, one needs knowledge of changes it undergoes from birth to maturity in the context of a normal environment. However, knowledge of its development in aberrant settings is also extremely useful, because it reveals the extent to which the development is a function of environmental experience (as opposed to genetically determined pre-wiring). Accordingly, we consider development of the visual system under both normal and restricted rearing conditions. The role of experience in the early development of the sensory systems in general, and the visual system in particular, has been widely studied through a variety of experiments involving carefully controlled manipulation of the environment presented to an animal. Extensive reviews of such results can be found in (Mitchell, 1984; Movshon, 1981; Hirsch, 1986; Boothe, 1986; Singer, 1986). Some examples of manipulation of visual experience are total pattern deprivation (e.g., dark rearing), selective deprivation of a certain class of patterns (e.g., vertical lines), monocular deprivation in animals with binocular vision, etc. Extensive studies involving behavioral deficits resulting from total visual pattern deprivation indicate that the deficits arise primarily as a result of impairment of visual information processing in the brain. The results of these experiments suggest specific developmental or learning mechanisms that may be operating at various stages of development, and at different levels in the system. We will discuss some of these hhhhhhhhhhhhhhh This is a working draft. All comments, especially constructive criticism and suggestions for improvement, will be appreciated. I am indebted to Prof. James Dannemiller for introducing me to some of the literature in infant development; to Prof. Leonard Uhr for his helpful comments on an initial draft of the paper; and to numerous researchers whose experimental work has provided the basis for the model outlined in this paper. This research was partially supported by grants from the National Science Foundation and the University of Wisconsin Graduate School. ",
+ "neighbors": [
+ 283,
+ 286,
+ 1235
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 387,
+ "label": 2,
+ "text": "Title: Recognizing Handwritten Digits Using Mixtures of Linear Models \nAbstract: We construct a mixture of locally linear generative models of a collection of pixel-based images of digits, and use them for recognition. Different models of a given digit are used to capture different styles of writing, and new images are classified by evaluating their log-likelihoods under each model. We use an EM-based algorithm in which the M-step is computationally straightforward principal components analysis (PCA). Incorporating tangent-plane information [12] about expected local deformations only requires adding tangent vectors into the sample covariance matrices for the PCA, and it demonstrably improves performance.",
+ "neighbors": [
+ 149,
+ 274,
+ 503,
+ 763,
+ 1041,
+ 1061,
+ 1102,
+ 1166,
+ 1181,
+ 1298
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 388,
+ "label": 2,
+ "text": "Title: Nonlinear gated experts for time series: discovering regimes and avoiding overfitting \nAbstract: In: International Journal of Neural Systems 6 (1995) p. 373 - 399. URL of this paper: ftp://ftp.cs.colorado.edu/pub/Time-Series/MyPapers/experts.ps.Z, or http://www.cs.colorado.edu/~andreas/Time-Series/MyPapers/experts.ps.Z University of Colorado Computer Science Technical Report CU-CS-798-95. In the analysis and prediction of real-world systems, two of the key problems are nonstationarity(often in the form of switching between regimes), and overfitting (particularly serious for noisy processes). This article addresses these problems using gated experts, consisting of a (nonlinear) gating network, and several (also nonlinear) competing experts. Each expert learns to predict the conditional mean, and each expert adapts its width to match the noise level in its regime. The gating network learns to predict the probability of each expert, given the input. This article focuses on the case where the gating network bases its decision on information from the inputs. This can be contrasted to hidden Markov models where the decision is based on the previous state(s) (i.e., on the output of the gating network at the previous time step), as well as to averaging over several predictors. In contrast, gated experts soft-partition the input space. This article discusses the underlying statistical assumptions, derives the weight update rules, and compares the performance of gated experts to standard methods on three time series: (1) a computer-generated series, obtained by randomly switching between two nonlinear processes, (2) a time series from the Santa Fe Time Series Competition (the light intensity of a laser in chaotic state), and (3) the daily electricity demand of France, a real-world multivariate problem with structure on several time scales. The main results are (1) the gating network correctly discovers the different regimes of the process, (2) the widths associated with each expert are important for the segmentation task (and they can be used to characterize the sub-processes), and (3) there is less overfitting compared to single networks (homogeneous multi-layer perceptrons), since the experts learn to match their variances to the (local) noise levels. This can be viewed as matching the local complexity of the model to the local complexity of the data. ",
+ "neighbors": [
+ 154,
+ 180,
+ 559,
+ 736,
+ 1241,
+ 1242
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 389,
+ "label": 5,
+ "text": "Title: Drug design by machine learning: Modelling drug activity \nAbstract: This paper describes an approach to modelling drug activity using machine learning tools. Some experiments in modelling the quantitative structure-activity relationship (QSAR) using a standard, Hansch, method and a machine learning system Golem were already reported in the literature. The paper describes the results of applying two other machine learning systems, Magnus Assistant and Retis, on the same data. The results achieved by the machine learning systems, are better then the results of the Hansch method; therefore, machine learning tools can be considered as very promising for solving that kind of problems. The given results also illustrate the variations of performance of the different machine learning systems applied to this drug design problem.",
+ "neighbors": [
+ 290
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 390,
+ "label": 0,
+ "text": "Title: Rule Based Database Integration in HIPED Heterogeneous Intelligent Processing in Engineering Design \nAbstract: In this paper 1 we describe one aspect of our research in the project called HIPED, which addressed the problem of performing design of engineering devices by accessing heterogeneous databases. The front end of the HIPED system consisted of interactive KRI-TIK, a multimodal reasoning system that combined case based and model based reasoning to solve a design problem. This paper focuses on the backend processing where five types of queries received from the front end are evaluated by mapping them appropriately using the \"facts\" about the schemas of the underlying databases and \"rules\" that establish the correspondance among the data in these databases in terms of relationships such as equivalence, overlap and set containment. The uniqueness of our approach stems from the fact that the mapping process is very forgiving in that the query received from the front end is evaluated with respect to a large number of possibilities. These possibilities are encoded in the form of rules that consider various ways in which the tokens in the given query may match relation names, attrribute names, or values in the underlying tables. The approach has been implemented using CORAL deductive database system as the rule processing engine. ",
+ "neighbors": [
+ 352
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 391,
+ "label": 4,
+ "text": "Title: Learning to Achieve Goals \nAbstract: Temporal difference methods solve the temporal credit assignment problem for reinforcement learning. An important subproblem of general reinforcement learning is learning to achieve dynamic goals. Although existing temporal difference methods, such as Q learning, can be applied to this problem, they do not take advantage of its special structure. This paper presents the DG-learning algorithm, which learns efficiently to achieve dynamically changing goals and exhibits good knowledge transfer between goals. In addition, this paper shows how traditional relaxation techniques can be applied to the problem. Finally, experimental results are given that demonstrate the superiority of DG learning over Q learning in a moderately large, synthetic, non-deterministic domain.",
+ "neighbors": [
+ 318,
+ 324,
+ 327,
+ 328,
+ 811
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 392,
+ "label": 6,
+ "text": "Title: Cryptographic Limitations on Learning Boolean Formulae and Finite Automata \nAbstract: In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosys- tems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers equivalent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a gener - alization of graph coloring. fl This research was conducted while the author was at Harvard University and supported by an A.T.& T. Bell Laboratories scholarship. y Supported by grants ONR-N00014-85-K-0445, NSF-DCR-8606366 and NSF-CCR-89-02500, DAAL03-86-K-0171, DARPA AFOSR 89-0506, and by SERC. ",
+ "neighbors": [
+ 62,
+ 257,
+ 306,
+ 316,
+ 333,
+ 346,
+ 537,
+ 572,
+ 726,
+ 754,
+ 767,
+ 780,
+ 812,
+ 1073,
+ 1204,
+ 1220,
+ 1333,
+ 1349
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 393,
+ "label": 5,
+ "text": "Title: Combining FOIL and EBG to Speed-up Logic Programs \nAbstract: This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.",
+ "neighbors": [
+ 198,
+ 801,
+ 802
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 394,
+ "label": 2,
+ "text": "Title: Algebraic Transformations of Objective Functions \nAbstract: Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations); also the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of the winner-take-all network from O(N 2 ) connections to O(N ). Also susceptible are inexact graph-matching, random dot matching, convolutions and coordinate transformations, and sorting. Simulations show that fixpoint-preserving transformations may be applied repeatedly and elaborately, and the example networks still robustly converge. ",
+ "neighbors": [
+ 304
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 395,
+ "label": 0,
+ "text": "Title: Developing Case-Based Reasoning for Structural Design \nAbstract: The use of case-based reasoning as a process model of design involves the subtasks of recalling previously known designs from memory and adapting these design cases or subcases to fit the current design context. The development of this process model for a particular design domain proceeds in parallel with the development of a representation for the cases, the case memory organisation, and the design knowledge needed in addition to specific designs. The selection of a particular representational paradigm for these types of information, and the details of its use for a particular problemsolving domain, depend on the intended use of the information to be represented and the project information available, as well as the nature of the domain. In this paper we describe the development and implementation of four case-based design systems: CASECAD, CADSYN, WIN, and DEMEX. Each system is described in terms of the content, organisation, and source of case memory, and the implementation of case recall and case adaptation. A comparison of these systems considers the relative advantages and disadvantages of the implementations. ",
+ "neighbors": [
+ 15
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 396,
+ "label": 2,
+ "text": "Title: Cortical Mechanisms of Visual Recognition and Learning: A Hierarchical Kalman Filter Model \nAbstract: We describe a biologically plausible model of dynamic recognition and learning in the visual cortex based on the statistical theory of Kalman filtering from optimal control theory. The model utilizes a hierarchical network whose successive levels implement Kalman filters operating over successively larger spatial and temporal scales. Each hierarchical level in the network predicts the current visual recognition state at a lower level and adapts its own recognition state using the residual error between the prediction and the actual lower-level state. Simultaneously, the network also learns an internal model of the spatiotemporal dynamics of the input stream by adapting the synaptic weights at each hierarchical level in order to minimize prediction errors. The Kalman filter model respects key neuroanatomical data such as the reciprocity of connections between visual cortical areas, and assigns specific computational roles to the inter-laminar connections known to exist between neurons in the visual cortex. Previous work elucidated the usefulness of this model in explaining neurophysiological phenomena such as endstopping and other related extra-classical receptive field effects. In this paper, in addition to providing a more detailed exposition of the model, we present a variety of experimental results demonstrating the ability of this model to perform robust spatiotemporal segmentation and recognition of objects and image sequences in the presence of varying amounts of occlusion, background clutter, and noise. ",
+ "neighbors": [
+ 40,
+ 355,
+ 432
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 397,
+ "label": 3,
+ "text": "Title: Finding Overlapping Distributions with MML \nAbstract: This paper considers an aspect of mixture modelling. Significantly overlapping distributions require more data for their parameters to be accurately estimated than well separated distributions. For example, two Gaussian distributions are considered to significantly overlap when their means are within three standard deviations of each other. If insufficient data is available, only a single component distribution will be estimated, although the data originates from two component distributions. We consider how much data is required to distinguish two component distributions from one distribution in mixture modelling using the minimum message length (MML) criterion. First, we perform experiments which show the MML criterion performs well relative to other Bayesian criteria. Second, we make two improvements to the existing MML estimates, that improve its performance with overlapping distributions. ",
+ "neighbors": [
+ 90,
+ 302,
+ 453
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 398,
+ "label": 2,
+ "text": "Title: Performance of the GCel-512 and PowerXPlorer for parallel neural network simulations \nAbstract: This report presents new results from work performed in the framework of the IC 3 A pro-gramme. Using the GCel-512 and the PowerXPlorer made available by the UvA, a performance prediction model for several neural network simulations could be validated quantitatively both for a larger processor grid and for a different target parallel processor configuration. The performance prediction model and its application on a popular neural network model | backpropagation | decomposed via network decomposition will be discussed here. Using the model, the suitability of the GCel-512 and PowerXPlorer are discussed in terms of performance, speedup, efficiency and scalability.",
+ "neighbors": [
+ 432
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 399,
+ "label": 2,
+ "text": "Title: Growing Cell Structures A Self-organizing Network for Unsupervised and Supervised Learning \nAbstract: We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the abovementioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible in contrast to earlier approaches toperform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks which generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented which are better than any results previously published. fl submitted for publication",
+ "neighbors": [
+ 63,
+ 366,
+ 427,
+ 430,
+ 872,
+ 943,
+ 1231
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 400,
+ "label": 4,
+ "text": "Title: Exploration and Model Building in Mobile Robot Domains \nAbstract: I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. It has been operating successfully in an office building environment for periods up to hours.",
+ "neighbors": [
+ 33,
+ 117,
+ 145,
+ 318,
+ 328,
+ 484
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 401,
+ "label": 0,
+ "text": "Title: Instance Pruning Techniques \nAbstract: The nearest neighbor algorithm and its derivatives are often quite successful at learning a concept from a training set and providing good generalization on subsequent input vectors. However, these techniques often retain the entire training set in memory, resulting in large memory requirements and slow execution speed, as well as a sensitivity to noise. This paper provides a discussion of issues related to reducing the number of instances retained in memory while maintaining (and sometimes improving) generalization accuracy, and mentions algorithms other researchers have used to address this problem. It presents three intuitive noise-tolerant algorithms that can be used to prune instances from the training set. In experiments on 29 applications, the algorithm that achieves the highest reduction in storage also results in the highest generalization accuracy of the three methods.",
+ "neighbors": [
+ 250,
+ 1258
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 402,
+ "label": 4,
+ "text": "Title: Reinforcement Learning in the Multi-Robot Domain \nAbstract: This paper describes a formulation of reinforcement learning that enables learning in noisy, dynamic environemnts such as in the complex concurrent multi-robot learning domain. The methodology involves minimizing the learning space through the use behaviors and conditions, and dealing with the credit assignment problem through shaped reinforcement in the form of heterogeneous reinforcement functions and progress estimators. We experimentally validate the ap proach on a group of four mobile robots learning a foraging task.",
+ "neighbors": [
+ 318,
+ 327,
+ 868,
+ 920
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 403,
+ "label": 2,
+ "text": "Title: FURTHER FACTS ABOUT INPUT TO STATE STABILIZATION \"Further facts about input to state stabilization\", IEEE\nAbstract: Report SYCON-88-15 ABSTRACT Previous results about input to state stabilizability are shown to hold even for systems which are not linear in controls, provided that a more general type of feedback be allowed. Applications to certain stabilization problems and coprime factorizations, as well as comparisons to other results on input to state stability, are also briefly discussed. ",
+ "neighbors": [
+ 305,
+ 820
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 404,
+ "label": 2,
+ "text": "Title: GAL: Networks that grow when they learn and shrink when they forget \nAbstract: Learning when limited to modification of some parameters has a limited scope; the capability to modify the system structure is also needed to get a wider range of the learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., number of hidden layers, units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as usually done, be determined by trial-and-error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of current connectionist literature is given on this line of thought. \"Grow and Learn\" (GAL) is a new algorithm that learns an association at one-shot due to being incremental and using a local representation. During the so-called \"sleep\" phase, units that were previously stored but which are no longer necessary due to recent modifications are removed to minimize network complexity. The incrementally constructed network can later be finetuned off-line to improve performance. Another method proposed that greatly increases recognition accuracy is to train a number of networks and vote over their responses. The algorithm and its variants are tested on recognition of handwritten numerals and seem promising especially in terms of learning speed. This makes the algorithm attractive for on-line learning tasks, e.g., in robotics. The biological plausibility of incremental learning is also discussed briefly. Earlier part of this work was realized at the Laboratoire de Microinformatique of Ecole Polytechnique Federale de Lausanne and was supported by the Fonds National Suisse de la Recherche Scientifique. Later part was realized at and supported by the International Computer Science Institute. A number of people helped by guiding, stimulating discussions or questions: Subutai Ahmad, Peter Clarke, Jerry Feldman, Christian Jutten, Pierre Marchal, Jean Daniel Nicoud, Steve Omohondro and Leon Personnaz. ",
+ "neighbors": [
+ 240,
+ 432,
+ 931
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 405,
+ "label": 2,
+ "text": "Title: Learning to Predict Reading Frames in E. coli DNA Sequences \nAbstract: Two fundamental problems in analyzing DNA sequences are (1) locating the regions of a DNA sequence that encode proteins, and (2) determining the reading frame for each region. We investigate using artificial neural networks (ANNs) to find coding regions, determine reading frames, and detect frameshift errors in E. coli DNA sequences. We describe our adaptation of the approach used by Uberbacher and Mural to identify coding regions in human DNA, and we compare the performance of ANNs to several conventional methods for predicting reading frames. Our experiments demonstrate that ANNs can outperform these conventional approaches. ",
+ "neighbors": [
+ 208,
+ 240,
+ 271,
+ 797
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 406,
+ "label": 6,
+ "text": "Title: Is Transfer Inductive? \nAbstract: Work is currently underway to devise learning methods which are better able to transfer knowledge from one task to another. The process of knowledge transfer is usually viewed as logically separate from the inductive procedures of ordinary learning. However, this paper argues that this `seperatist' view leads to a number of conceptual difficulties. It offers a task analysis which situates the transfer process inside a generalised inductive protocol. It argues that transfer should be viewed as a subprocess within induction and not as an independent procedure for transporting knowledge between learning trials.",
+ "neighbors": [
+ 365,
+ 432
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 407,
+ "label": 2,
+ "text": "Title: Experiments on the Transfer of Knowledge between Neural Networks Reprinted from: Computational Learning Theory and\nAbstract: This chapter describes three studies which address the question of how neural network learning can be improved via the incorporation of information extracted from other networks. This general problem, which we call network transfer, encompasses many types of relationships between source and target networks. Our focus is on the utilization of weights from source networks which solve a subproblem of the target network task, with the goal of speeding up learning on the target task. We demonstrate how the approach described here can improve learning speed by up to ten times over learning starting with random weights. ",
+ "neighbors": [
+ 4,
+ 374,
+ 917
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 408,
+ "label": 2,
+ "text": "Title: EXPERIMENTING WITH THE CHEESEMAN-STUTZ EVIDENCE APPROXIMATION FOR PREDICTIVE MODELING AND DATA MINING \nAbstract: TECHNICAL REPORT NO. 947 June 5, 1995 ",
+ "neighbors": [
+ 298,
+ 1033
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 409,
+ "label": 6,
+ "text": "Title: Large Margin Classification Using the Perceptron Algorithm \nAbstract: We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margins. Compared to Vapnik's algorithm, however, ours is much simpler to implement, and much more efficient in terms of computation time. We also show that our algorithm can be efficiently used in very high dimensional spaces using kernel functions. We performed some experiments using our algorithm, and some variants of it, for classifying images of handwritten digits. The performance of our algorithm is close to, but not as good as, the performance of maximal-margin classifiers on the same problem.",
+ "neighbors": [
+ 255
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 410,
+ "label": 5,
+ "text": "Title: Converting Thread-Level Parallelism to Instruction-Level Parallelism via Simultaneous Multithreading \nAbstract: A version of this paper will appear in ACM Transactions on Computer Systems, August 1997. Permission to make digital copies of part or all of this work for personal or classroom use is grantedwithout fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Abstract To achieve high performance, contemporary computer systems rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). Wide-issue superscalar processors exploit ILP by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit TLP by executing different threads in parallel on different processors. Unfortunately, both parallel-processing styles statically partition processor resources, thus preventing them from adapting to dynamically-changing levels of ILP and TLP in a program. With insufficient TLP, processors in an MP will be idle; with insufficient ILP, multiple-issue hardware on a superscalar is wasted. This paper explores parallel processing on an alternative architecture, simultaneous multithreading (SMT), which allows multiple threads to compete for and share all of the processors resources every cycle. The most compelling reason for running parallel applications on an SMT processor is its ability to use thread-level parallelism and instruction-level parallelism interchangeably. By permitting multiple threads to share the processors functional units simultaneously, the processor can use both ILP and TLP to accommodate variations in parallelism. When a program has only a single thread, all of the SMT processors resources can be dedicated to that thread; when more TLP exists, this parallelism can compensate for a lack of ",
+ "neighbors": [
+ 87,
+ 110,
+ 111
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 411,
+ "label": 2,
+ "text": "Title: GIBBS-MARKOV MODELS \nAbstract: In this paper we present a framework for building probabilistic automata parameterized by context-dependent probabilities. Gibbs distributions are used to model state transitions and output generation, and parameter estimation is carried out using an EM algorithm where the M-step uses a generalized iterative scaling procedure. We discuss relations with certain classes of stochastic feedforward neural networks, a geometric interpretation for parameter estimation, and a simple example of a statistical language model constructed using this methodology. ",
+ "neighbors": [
+ 3,
+ 143,
+ 633
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 412,
+ "label": 2,
+ "text": "Title: Convergence and new operations in SDM new method for converging in the SDM memory, utilizing\nAbstract: Report R95:13 ISRN : SICS-R--95/13-SE ISSN : 0283-3638 Abstract ",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 413,
+ "label": 1,
+ "text": "Title: Tracking the red queen: Measurements of adaptive progress in co-evolution ary simulations. In Third European\nAbstract: Current expert systems cannot properly handle imprecise and incomplete information. On the other hand, neural networks can perform pattern recognition operations even in noisy environments. Against this background, we have implemented a neural expert system shell NEULA, whose computational mechanism processes imprecisely or incompletely given information by means of approximate probabilistic reasoning. ",
+ "neighbors": [
+ 123,
+ 234,
+ 594,
+ 1337
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 414,
+ "label": 3,
+ "text": "Title: FLEXIBLE PARAMETRIC MEASUREMENT ERROR MODELS \nAbstract: Inferences in measurement error models can be sensitive to modeling assumptions. Specifically, if the model is incorrect then the estimates can be inconsistent. To reduce sensitivity to modeling assumptions and yet still retain the efficiency of parametric inference we propose to use flexible parametric models which can accommodate departures from standard parametric models. We use mixtures of normals for this purpose. We study two cases in detail: a linear errors-in-variables model and a change-point Berkson model. fl Raymond J. Carroll is Professor of Statistics, Nutrition and Toxicology, Department of Statistics, Texas A&M University, College Station, TX 77843-3143. Kathryn Roeder is Associate Professor, and Larry Wasser-man is Professor, Department of Statistics, Carnegie-Mellon University, Pittsburgh PA 15213-3890. Carroll's research was supported by a grant from the National Cancer Institute (CA-57030). Roeder's research was supported by NSF grant DMS-9496219. Wasserman's research was supported by NIH grant RO1-CA54852 and NSF grants DMS-9303557 and DMS-9357646. ",
+ "neighbors": [
+ 90,
+ 199
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 415,
+ "label": 1,
+ "text": "Title: Orgy in the Computer: Multi-Parent Reproduction in Genetic Algorithms \nAbstract: In this paper we investigate the phenomenon of multi-parent reproduction, i.e. we study recombination mechanisms where an arbitrary n > 1 number of parents participate in creating children. In particular, we discuss scanning crossover that generalizes the standard uniform crossover and diagonal crossover that generalizes 1-point crossover, and study the effects of different number of parents on the GA behavior. We conduct experiments on tough function optimization problems and observe that by multi-parent operators the performance of GAs can be enhanced significantly. We also give a theoretical foundation by showing how these operators work on distributions.",
+ "neighbors": [
+ 91,
+ 482,
+ 683,
+ 729,
+ 1107
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 416,
+ "label": 1,
+ "text": "Title: Information filtering: Selection mechanisms in learning systems. Machine Learning, 10:113-151, 1993. Generalization as search. Artificial\nAbstract: Draft A Brief Introduction to Neural Networks Richard D. De Veaux Lyle H. Ungar Williams College University of Pennsylvania Abstract Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques. KEYWORDS: nonparametric regression; function approximation; backpropagation. 1 Introduction Networks that mimic the way the brain works; computer programs that actually LEARN patterns; forecasting without having to know statistics. These are just some of the many claims and attractions of artificial neural networks. Neural networks (we will henceforth drop the term artificial, unless we need to distinguish them from biological neural networks) seem to be everywhere these days, and at least in their advertising, are able to do all that statistics can do without all the fuss and bother of having to do anything except buy a piece of software. Neural networks have been successfully used for many different applications including robotics, chemical process control, speech recognition, optical character recognition, credit card fraud detection, interpretation of chemical spectra and vision for autonomous navigation of vehicles. (Pointers to the literature are given at the end of this article.) In this article we will attempt to explain how one particular type of neural network, feedforward networks with sigmoidal activation functions (\"backpropagation networks\") actually works, how it is \"trained\", and how it compares with some more well known statistical techniques. As an example of why someone would want to use a neural network, consider the problem of recognizing hand written ZIP codes on letters. This is a classification problem, where the 1 ",
+ "neighbors": [
+ 91,
+ 106,
+ 300,
+ 636,
+ 1017,
+ 1297
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 417,
+ "label": 2,
+ "text": "Title: Parzen. On estimation of a probability density function and mode. Annual Mathematical Statistics, 33:1065-1076, 1962.\nAbstract: To apply the algorithm for classification we assign each class a separate set of codebook Gaussians. Each set is only trained with patterns from a single class. After having trained the codebook Gaussians, each set provides an estimate of the probability function of one class; just as with Parzen window estimation, we take as the estimate of the pattern distribution the average of all Gaussians in the set. Classification of a pattern may now be done by calculating the probability of each class at the respective sample point, and assigning to the pattern the class with the highest probability. Hence the whole codebook plays a role in the classification of patterns. This is not the case with regular classification schemes using codebooks. We have tested the classification scheme on several classification tasks including the two spiral problem. We compared our algorithm to various other classification algorithms and it came out second; the best algorithm for the applications is the Parzen window estimation. However, the computing time and memory for Parzen window estimation are excessive when compared to our algorithm, and hence, in practical situations, our algorithm is to be preferred. We have developed a fast algorithm which combines attractive properties of both Parzen window estimation and vector quantization. The scale parameter is tuned adaptively and, therefore, is not set in an ad hoc manner. It allows a classification strategy in which all the codebook vectors are taken into account. This yields better results than the standard vector quantization techniques. An interesting topic for further research is to use radially non-symmetric Gaussians. ",
+ "neighbors": [
+ 49,
+ 356,
+ 357,
+ 432,
+ 631
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 418,
+ "label": 3,
+ "text": "Title: Maximum Working Likelihood Inference with Markov Chain Monte Carlo \nAbstract: Maximum working likelihood (MWL) inference in the presence of missing data can be quite challenging because of the intractability of the associated marginal likelihood. This problem can be further exacerbated when the number of parameters involved is large. We propose using Markov chain Monte Carlo (MCMC) to first obtain both the MWL estimator and the working Fisher information matrix and, second, using Monte Carlo quadrature to obtain the remaining components of the correct asymptotic MWL variance. Evaluation of the marginal likelihood is not needed. We demonstrate consistency and asymptotic normality when the number of independent and identically distributed data clusters is large but the likelihood may be incorrectly specified. An analysis of longitudinal ordinal data is given for an example. KEY WORDS: Convergence of posterior distributions, Maximum likelihood, Metropolis ",
+ "neighbors": [
+ 21,
+ 25
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 419,
+ "label": 1,
+ "text": "Title: Using Problem Generators to Explore the Effects of Epistasis \nAbstract: In this paper we develop an empirical methodology for studying the behavior of evolutionary algorithms based on problem generators. We then describe three generators that can be used to study the effects of epistasis on the performance of EAs. Finally, we illustrate the use of these ideas in a preliminary exploration of the effects of epistasis on simple GAs.",
+ "neighbors": [
+ 91,
+ 579,
+ 643,
+ 986
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 420,
+ "label": 1,
+ "text": "Title: On the Virtues of Parameterized Uniform Crossover \nAbstract: Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Theoretical results suggest that, from the view of hyperplane sampling disruption, uniform crossover has few redeeming features. However, a growing body of experimental evidence suggests otherwise. In this paper, we attempt to reconcile these opposing views of uniform crossover and present a framework for understanding its virtues.",
+ "neighbors": [
+ 138,
+ 545,
+ 579,
+ 640,
+ 732,
+ 816
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 421,
+ "label": 2,
+ "text": "Title: Learning Sequential Tasks by Incrementally Adding Higher Orders \nAbstract: An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over recurrent networks.",
+ "neighbors": [
+ 201,
+ 446,
+ 1024
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 422,
+ "label": 2,
+ "text": "Title: LEARNING FACTORIAL CODES BY PREDICTABILITY MINIMIZATION (Neural Computation, 4(6):863-879, 1992) \nAbstract: I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus. I discuss various simple yet potentially powerful implementations of the principle which aim at finding binary factorial codes (Bar-low et al., 1989), i.e. codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. Such codes are potentially relevant for (1) segmentation tasks, (2) speeding up supervised learning, (3) novelty detection. Methods for finding factorial codes automatically implement Occam's razor for finding codes using a minimal number of units. Unlike previous methods the novel principle has a potential for removing not only linear but also non-linear output redundancy. Illustrative experiments show that algorithms based on the principle of predictability minimization are practically feasible. The final part of this paper describes an entirely local algorithm that has a potential for learning unique representations of extended input sequences.",
+ "neighbors": [
+ 335,
+ 469,
+ 924
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 423,
+ "label": 5,
+ "text": "Title: The Limits of Instruction Level Parallelism in SPEC95 Applications \nAbstract: This paper examines the limits to instruction level parallelism that can be found in programs, in particular the SPEC95 benchmark suite. It differs from earlier studies in removing non-essential true dependencies that occur as a result of the compiler employing a stack for subroutine linkage. This is a subtle limitation to parallelism that is not readily evident as it appears as a true dependency on the stack pointer. In this paper we show that its removal exposes far more parallelism than has been seen previously. We refer to this type of parallelism as \"parallelism at a distance\" because it requires impossibly large instruction windows for detection. We conclude with two observations: 1) that a single instruction window characteristic of superscalar machines is inadequate for detecting parallelism at a distance; and 2) in order to take advantage of this parallelism the compiler must be involved, or separate threads must be explicitly programmed. ",
+ "neighbors": [
+ 110,
+ 1112,
+ 1285,
+ 1332
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 424,
+ "label": 2,
+ "text": "Title: GIBBS-MARKOV MODELS \nAbstract: In this paper we present a framework for building probabilistic automata parameterized by context-dependent probabilities. Gibbs distributions are used to model state transitions and output generation, and parameter estimation is carried out using an EM algorithm where the M-step uses a generalized iterative scaling procedure. We discuss relations with certain classes of stochastic feedforward neural networks, a geometric interpretation for parameter estimation, and a simple example of a statistical language model constructed using this methodology. ",
+ "neighbors": [
+ 3,
+ 143,
+ 633
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 425,
+ "label": 2,
+ "text": "Title: The Role of Constraints in Hebbian Learning \nAbstract: Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a \"graded\" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is \"sharpened\" to a subset of maximally-correlated inputs. If two equivalent input populations (e.g. two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances. These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.",
+ "neighbors": [
+ 432,
+ 1079
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 426,
+ "label": 4,
+ "text": "Title: On the Convergence of Stochastic Iterative Dynamic Programming Algorithms \nAbstract: Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD() algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD() and Q-learning belong. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
+ "neighbors": [
+ 120,
+ 318,
+ 326,
+ 327,
+ 334,
+ 362,
+ 666,
+ 774,
+ 954,
+ 962,
+ 1326
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 427,
+ "label": 2,
+ "text": "Title: Incremental Grid Growing: Encoding High-Dimensional Structure into a Two-Dimensional Feature Map \nAbstract: Knowledge of clusters and their relations is important in understanding high-dimensional input data with unknown distribution. Ordinary feature maps with fully connected, fixed grid topology cannot properly reflect the structure of clusters in the input space|there are no cluster boundaries on the map. Incremental feature map algorithms, where nodes and connections are added to or deleted from the map according to the input distribution, can overcome this problem. However, so far such algorithms have been limited to maps that can be drawn in 2-D only in the case of 2-dimensional input space. In the approach proposed in this paper, nodes are added incrementally to a regular, 2-dimensional grid, which is drawable at all times, irrespective of the dimensionality of the input space. The process results in a map that explicitly represents the cluster structure of the high-dimensional input. ",
+ "neighbors": [
+ 114,
+ 399,
+ 432
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 428,
+ "label": 1,
+ "text": "Title: Learning to be Selective in Genetic-Algorithm-Based Design Optimization \nAbstract: Lattice conditional independence (LCI) models for multivariate normal data recently have been introduced for the analysis of non-monotone missing data patterns and of nonnested dependent linear regression models ( seemingly unrelated regressions). It is shown here that the class of LCI models coincides with a subclass of the class of graphical Markov models determined by acyclic digraphs (ADGs), namely, the subclass of transitive ADG models. An explicit graph - theoretic characterization of those ADGs that are Markov equivalent to some transitive ADG is obtained. This characterization allows one to determine whether a specific ADG D is Markov equivalent to some transitive ADG, hence to some LCI model, in polynomial time, without an exhaustive search of the (exponentially large) equivalence class [D ]. These results do not require the existence or positivity of joint densities.",
+ "neighbors": [
+ 36,
+ 91,
+ 429,
+ 1084,
+ 1334
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 429,
+ "label": 1,
+ "text": "Title: A Genetic Algorithm for Continuous Design Space Search \nAbstract: Genetic algorithms (GAs) have been extensively used as a means for performing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains the simple, classical implementation of a GA based on binary encoding and bit mutation and crossover is often inefficient and unable to reach the global optimum. In this paper we describe a GA for continuous design-space optimization that uses new GA operators and strategies tailored to the structure and properties of engineering design domains. Empirical results in the domains of supersonic transport aircraft and supersonic missile inlets demonstrate that the newly formulated GA can be significantly better than the classical GA in both efficiency and reliability. ",
+ "neighbors": [
+ 36,
+ 91,
+ 428,
+ 1084
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 430,
+ "label": 2,
+ "text": "Title: References \"Using Neural Networks to Identify Jets\", Kohonen, \"Self Organized Formation of Topologically Correct Feature\nAbstract: 2] D. E. Rumelhart, G. E. Hinton and R. J. Williams, \"Learning Internal Representations by Error Propagation\", in D. E. Rumelhart and J. L. McClelland (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Vol. 1), MIT Press (1986). ",
+ "neighbors": [
+ 6,
+ 19,
+ 51,
+ 63,
+ 69,
+ 115,
+ 203,
+ 272,
+ 399,
+ 447,
+ 872,
+ 943,
+ 967,
+ 969,
+ 1021,
+ 1138
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 431,
+ "label": 2,
+ "text": "Title: A New Look at Tree Models for Multiple Sequence Alignment \nAbstract: Evolutionary trees are frequently used as the underlying model in the design of algorithms, optimization criteria and software packages for multiple sequence alignment (MSA). In this paper, we reexamine the suitability of trees as a universal model for MSA in light of the broad range of biological questions that MSA's are used to address. A tree model consists of a tree topology and a model of accepted mutations along the branches. After surveying the major applications of MSA, examples from the molecular biology literature are used to illustrate situations in which this tree model fails. This occurs when the relationship between residues in a column cannot be described by a tree; for example, in some structural and functional applications of MSA. It also occurs in situations, such as lateral gene transfer, where an entire gene cannot be modeled by a unique tree. In cases of nonparsimonous data or convergent evolution, it may be difficult to find a consistent mutational model. We hope that this survey will promote dialogue between biologists and computer scientists, leading to more biologically realistic research on MSA.",
+ "neighbors": [
+ 3,
+ 172
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 432,
+ "label": 2,
+ "text": "Title: Cholinergic suppression of transmission may allow combined associative memory function and self-organization in the neocortex. \nAbstract: Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedfor-ward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.",
+ "neighbors": [
+ 6,
+ 12,
+ 46,
+ 51,
+ 63,
+ 64,
+ 69,
+ 79,
+ 83,
+ 104,
+ 114,
+ 115,
+ 117,
+ 132,
+ 180,
+ 209,
+ 213,
+ 287,
+ 303,
+ 312,
+ 332,
+ 344,
+ 356,
+ 396,
+ 398,
+ 404,
+ 406,
+ 417,
+ 425,
+ 427,
+ 443,
+ 447,
+ 453
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 433,
+ "label": 3,
+ "text": "Title: Markov Chain Monte Carlo Methods Based on `Slicing' the Density Function \nAbstract: Technical Report No. 9722, Department of Statistics, University of Toronto Abstract. One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position. Variations on such `slice sampling' methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily-constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling efficiency by suppressing random walk behaviour. Random walks can also be avoided in some slice sampling schemes that simultaneously update all variables. ",
+ "neighbors": [
+ 73,
+ 74,
+ 1044
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 434,
+ "label": 4,
+ "text": "Title: On the Complexity of Solving Markov Decision Problems \nAbstract: Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the struc ture of MDPs.",
+ "neighbors": [
+ 276,
+ 318,
+ 811,
+ 1269
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 435,
+ "label": 0,
+ "text": "Title: Analysis and Empirical Studies of Derivational Analogy \nAbstract: Derivational analogy is a technique for reusing problem solving experience to improve problem solving performance. This research addresses an issue common to all problem solvers that use derivational analogy: overcoming the mismatches between past experiences and new problems that impede reuse. First, this research describes the variety of mismatches that can arise and proposes a new approach to derivational analogy that uses appropriate adaptation strategies for each. Second, it compares this approach with seven others in a common domain. This empirical study shows that derivational analogy is almost always more efficient than problem solving from scratch, but the amount it contributes depends on its ability to overcome mismatches ",
+ "neighbors": [
+ 378,
+ 906
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 436,
+ "label": 2,
+ "text": "Title: Analysis of Dynamical Recognizers \nAbstract: Pollack (1991) demonstrated that second-order recurrent neural networks can act as dynamical recognizers for formal languages when trained on positive and negative examples, and observed both phase transitions in learning and IFS-like fractal state sets. Follow-on work focused mainly on the extraction and minimization of a finite state automaton (FSA) from the trained network. However, such networks are capable of inducing languages which are not regular, and therefore not equivalent to any FSA. Indeed, it may be simpler for a small network to fit its training data by inducing such a non-regular language. But when is the network's language not regular? In this paper, using a low dimensional network capable of learning all the Tomita data sets, we present an empirical method for testing whether the language induced by the network is regular or not. We also provide a detailed \"-machine analysis of trained networks for both regular and non-regular languages. ",
+ "neighbors": [
+ 230,
+ 249
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 437,
+ "label": 1,
+ "text": "Title: of a simulator for evolving morphology are: Universal the simulator should cover an infinite gen\nAbstract: Funes, P. and Pollack, J. (1997) Computer Evolution of Buildable Objects. Fourth European Conference on Artificial Life. P. Husbands and I. Harvey, eds., MIT Press. pp 358-367. knowledge into the program, which would result in familiar structures, we provided the algorithm with a model of the physical reality and a purely utilitarian fitness function, thus supplying measures of feasibility and functionality. In this way the evolutionary process runs in an environment that has not been unnecessarily constrained. We added, however, a requirement of computability to reject overly complex structures when they took too long for our simulations to evaluate. The results are encouraging. The evolved structures had a surprisingly alien look: they are not based in common knowledge on how to build with brick toys; instead, the computer found ways of its own through the evolutionary search process. We were able to assemble the final designs manually and confirm that they accomplish the objectives introduced with our fitness functions. After some background on related problems, we describe our physical simulation model for two-dimensional Lego structures, and the representation for encoding them and applying evolution. We demonstrate the feasibility of our work with photos of actual objects which were the result of particular optimizations. Finally, we discuss future work and draw some conclusions. In order to evolve both the morphology and behavior of autonomous mechanical devices which can be manufactured, one must have a simulator which operates under several constraints, and a resultant controller which is adaptive enough to cover the gap between simulated and real world. eral space of mechanisms. Conservative - because simulation is never perfect, it should preserve a margin of safety. Efficient - it should be quicker to test in simulation than through physical production and test. Buildable - results should be convertible from a simula tion to a real object Computer Evolution of Buildable Objects Abstract The idea of co-evolution of bodies and brains is becoming popular, but little work has been done in evolution of physical structure because of the lack of a general framework for doing it. Evolution of creatures in simulation has been constrained by the reality gap which implies that resultant objects are usually not buildable. The work we present takes a step in the problem of body evolution by applying evolutionary techniques to the design of structures assembled out of parts. Evolution takes place in a simulator we designed, which computes forces and stresses and predicts failure for 2-dimensional Lego structures. The final printout of our program is a schematic assembly, which can then be built physically. We demonstrate its functionality in several different evolved entities.",
+ "neighbors": [
+ 106,
+ 439,
+ 786
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 438,
+ "label": 5,
+ "text": "Title: Knowledge Acquisition via Knowledge Integration \nAbstract: In this paper we are concerned with the problem of acquiring knowledge by integration. Our aim is to construct an integrated knowledge base from several separate sources. The need to merge knowledge bases can arise, for example, when knowledge bases are acquired independently from interactions with several domain experts. As opinions of different domain experts may differ, the knowledge bases constructed in this way will normally differ too. A similar problem can also arise whenever separate knowledge bases are generated by learning algorithms. The objective of integration is to construct one system that exploits all the knowledge that is available and has a good performance. The aim of this paper is to discuss the methodology of knowledge integration, describe the implemented system (INTEG.3), and present some concrete results which demonstrate the advantages of this method. ",
+ "neighbors": [
+ 97
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 439,
+ "label": 1,
+ "text": "Title: Evolving Self-Supporting Structures Page 18 References Evolution of Visual Control Systems for Robots. To appear\nAbstract: In this paper we are concerned with the problem of acquiring knowledge by integration. Our aim is to construct an integrated knowledge base from several separate sources. The need to merge knowledge bases can arise, for example, when knowledge bases are acquired independently from interactions with several domain experts. As opinions of different domain experts may differ, the knowledge bases constructed in this way will normally differ too. A similar problem can also arise whenever separate knowledge bases are generated by learning algorithms. The objective of integration is to construct one system that exploits all the knowledge that is available and has a good performance. The aim of this paper is to discuss the methodology of knowledge integration, describe the implemented system (INTEG.3), and present some concrete results which demonstrate the advantages of this method. ",
+ "neighbors": [
+ 91,
+ 106,
+ 123,
+ 437,
+ 491
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 440,
+ "label": 1,
+ "text": "Title: A COMPRESSION ALGORITHM FOR PROBABILITY TRANSITION MATRICES \nAbstract: This paper describes a compression algorithm for probability transition matrices. The compressed matrix is itself a probability transition matrix. In general the compression is not error-free, but the error appears to be small even for high levels of compression. ",
+ "neighbors": [
+ 56,
+ 1064
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 441,
+ "label": 3,
+ "text": "Title: BAYESIAN STATISTICS 6, pp. 000--000 Exact sampling for Bayesian inference: towards general purpose algorithms \nAbstract: There are now methods for organising a Markov chain Monte Carlo simulation so that it can be guaranteed that the state of the process at a given time is exactly drawn from the target distribution. The question of assessing convergence totally vanishes. Such methods are known as exact or perfect sampling. The approach that has received most attention uses the protocol of coupling from the past devised by Propp and Wilson (Random Structures and Algorithms,1996), in which multiple dependent paths of the chain are run from different initial states at a sequence of initial times going backwards into the past, until they satisfy the condition of coalescence by time 0. When this is achieved the state at time 0 is distributed according to the required target. This process must be implemented very carefully to assure its validity (including appropriate re-use of random number streams), and also requires one of various tricks to enable us to follow infinitely many sample paths with a finite amount of work. With the ultimate objective of Bayesian MCMC with guaranteed convergence, the purpose of this paper is to describe recent efforts to construct exact sampling methods for continuous-state Markov chains. We review existing methods based on gamma-coupling and rejection sampling (Murdoch and Green, Scandinavian Journal of Statistics, 1998), that are quite straightforward to understand, but require a closed form for the transition kernel and entail cumbersome algebraic manipulation. We then introduce two new methods based on random walk Metropolis, that offer the prospect of more automatic use, not least because the difficult, continuous, part of the transition mechanism can be coupled in a generic way, using a proposal distribution of convenience. One of the methods is based on a neat decomposition of any unimodal (multivariate) symmetric density into pieces that may be re-assembled to construct any translated copy of itself: that allows coupling of a continuum of Metropolis proposals to a finite set, at least for a compact state space. We discuss methods for economically coupling the subsequent accept/reject decisions. Our second new method deals with unbounded state spaces, using a trick due to W. S. Kendall of running a coupled dominating process in parallel with the sample paths of interest. The random subset of the state space below the dominating path is compact, allowing efficient coupling and coalescence. We look towards the possibility that application of such methods could become sufficiently convenient that they could become the basis for routine Bayesian computation in the foreseeable future. ",
+ "neighbors": [
+ 11,
+ 55,
+ 90
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 442,
+ "label": 6,
+ "text": "Title: Apple Tasting and Nearly One-Sided Learning \nAbstract: In the standard on-line model the learning algorithm tries to minimize the total number of mistakes made in a series of trials. On each trial the learner sees an instance, either accepts or rejects that instance, and then is told the appropriate response. We define a natural variant of this model (\"apple tasting\") where the learner gets feedback only when the instance is accepted. We use two transformations to relate the apple tasting model to an enhanced standard model where false acceptances are counted separately from false rejections. We present a strategy for trading between false acceptances and false rejections in the standard model. From one perspective this strategy is exactly optimal, including constants. We apply our results to obtain a good general purpose apple tasting algorithm as well as nearly optimal apple tasting algorithms for a variety of standard classes, such as conjunctions and disjunctions of n boolean variables. We also present and analyze a simpler transformation useful when the instances are drawn at random rather than selected by an adversary. ",
+ "neighbors": [
+ 2,
+ 306
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 443,
+ "label": 2,
+ "text": "Title: PREENS, a Parallel Research Execution Environment for Neural Systems \nAbstract: PREENS a Parallel Research Execution Environment for Neural Systems is a distributed neurosimulator, targeted on networks of workstations and transputer systems. As current applications of neural networks often contain large amounts of data and as the neural networks involved in tasks such as vision are very large, high requirements on memory and computational resources are imposed on the target execution platforms. PREENS can be executed in a distributed environment, i.e. tools and neural network simulation programs can be running on any machine connectable via TCP/IP. Using this approach, larger tasks and more data can be examined using an efficient coarse grained parallelism. Furthermore, the design of PREENS allows for neural networks to be running on any high performance MIMD machine such as a trans-puter system. In this paper, the different features and design concepts of PREENS are discussed. These can also be used for other applications, like image processing.",
+ "neighbors": [
+ 432,
+ 1018
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 444,
+ "label": 2,
+ "text": "Title: Keeping Neural Networks Simple by Minimizing the Description Length of the Weights \nAbstract: Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the trade-off between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed efficiently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights.",
+ "neighbors": [
+ 43,
+ 86,
+ 100,
+ 561,
+ 1288
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 445,
+ "label": 6,
+ "text": "Title: Learning to Order Things \nAbstract: There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u; v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the \"Hedge\" algorithm, for finding a good linear combination of ranking \"experts.\" We use the ordering algorithm combined with the on-line learning algorithm to find a combination of \"search experts,\" each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach. ",
+ "neighbors": [
+ 147,
+ 330
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 446,
+ "label": 2,
+ "text": "Title: A Connectionist Symbol Manipulator That Discovers the Structure of Context-Free Languages \nAbstract: We present a neural net architecture that can discover hierarchical and recursive structure in symbol strings. To detect structure at multiple levels, the architecture has the capability of reducing symbols substrings to single symbols, and makes use of an external stack memory. In terms of formal languages, the architecture can learn to parse strings in an LR(0) context-free grammar. Given training sets of positive and negative exemplars, the architecture has been trained to recognize many different grammars. The architecture has only one layer of modifiable weights, allowing for a Many cognitive domains involve complex sequences that contain hierarchical or recursive structure, e.g., music, natural language parsing, event perception. To illustrate, \"the spider that ate the hairy fly\" is a noun phrase containing the embedded noun phrase \"the hairy fly.\" Understanding such multilevel structures requires forming reduced descriptions (Hinton, 1988) in which a string of symbols or states (\"the hairy fly\") is reduced to a single symbolic entity (a noun phrase). We present a neural net architecture that learns to encode the structure of symbol strings via such reduction transformations. The difficult problem of extracting multilevel structure from complex, extended sequences has been studied by Mozer (1992), Ring (1993), Rohwer (1990), and Schmidhuber (1992), among others. While these previous efforts have made some straightforward interpretation of its behavior.",
+ "neighbors": [
+ 201,
+ 421
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 447,
+ "label": 2,
+ "text": "Title: SELF-ORGANIZING PROCESS BASED ON LATERAL INHIBITION AND SYNAPTIC RESOURCE REDISTRIBUTION \nAbstract: Self-organizing feature maps are usually implemented by abstracting the low-level neural and parallel distributed processes. An external supervisor finds the unit whose weight vector is closest in Euclidian distance to the input vector and determines the neighborhood for weight adaptation. The weights are changed proportional to the Euclidian distance. In a biologically more plausible implementation, similarity is measured by a scalar product, neighborhood is selected through lateral inhibition and weights are changed by redistributing synaptic resources. The resulting self-organizing process is quite similar to the abstract case. However, the process is somewhat hampered by boundary effects and the parameters need to be carefully evolved. It is also necessary to add a redundant dimension to the input vectors.",
+ "neighbors": [
+ 430,
+ 432
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 448,
+ "label": 3,
+ "text": "Title: [12] J. Whittaker. Graphical Models in Applied Mathematical Multivariate Statis- \nAbstract: Self-organizing feature maps are usually implemented by abstracting the low-level neural and parallel distributed processes. An external supervisor finds the unit whose weight vector is closest in Euclidian distance to the input vector and determines the neighborhood for weight adaptation. The weights are changed proportional to the Euclidian distance. In a biologically more plausible implementation, similarity is measured by a scalar product, neighborhood is selected through lateral inhibition and weights are changed by redistributing synaptic resources. The resulting self-organizing process is quite similar to the abstract case. However, the process is somewhat hampered by boundary effects and the parameters need to be carefully evolved. It is also necessary to add a redundant dimension to the input vectors.",
+ "neighbors": [
+ 181,
+ 648,
+ 698,
+ 835,
+ 1139,
+ 1140
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 449,
+ "label": 4,
+ "text": "Title: Reinforcement Learning with Imitation in Heterogeneous Multi-Agent Systems \nAbstract: The application of decision making and learning algorithms to multi-agent systems presents many interestingresearch challenges and opportunities. Among these is the ability for agents to learn how to act by observing or imitating other agents. We describe an algorithm, the IQ-algorithm, that integrates imitation with Q-learning. Roughly, a Q-learner uses the observations it has made of an expert agent to bias its exploration in promising directions. This algorithm goes beyond previous work in this direction by relaxing the oft-made assumptions that the learner (observer) and the expert (observed agent) share the same objectives and abilities. Our preliminary experiments demonstrate significant transfer between agents using the IQ-model and in many cases reductions in training time. ",
+ "neighbors": [
+ 327,
+ 382,
+ 916,
+ 939
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 450,
+ "label": 2,
+ "text": "Title: Face Recognition: A Hybrid Neural Network Approach \nAbstract: Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult (Turk and Pentland, 1991). We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the self-organizing map, and a multilayer perceptron in place of the convolutional network. The Karhunen-Loeve transform performs almost as well (5.3% error versus 3.8%). The multilayer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach (Turk and Pentland, 1991) on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer. ",
+ "neighbors": [
+ 191
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 451,
+ "label": 3,
+ "text": "Title: CAUSATION, ACTION, AND COUNTERFACTUALS \nAbstract: We present a new algorithm for solving Markov decision problems that extends the modified policy iteration algorithm of Puterman and Shin [6] in two important ways: 1) The new algorithm is asynchronous in that it allows the values of states to be updated in arbitrary order, and it does not need to consider all actions in each state while updating the policy. 2) The new algorithm converges under more general initial conditions than those required by modified policy iteration. Specifically, the set of initial policy-value function pairs for which our algorithm guarantees convergence is a strict superset of the set for which modified policy iteration converges. This generalization was obtained by making a simple and easily implementable change to the policy evaluation operator used in updating the value function. Both the asynchronous nature of our algorithm and its convergence under more general conditions expand the range of problems to which our algorithm can be applied. ",
+ "neighbors": [
+ 196,
+ 225,
+ 742,
+ 1106,
+ 1139
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 452,
+ "label": 6,
+ "text": "Title: On the Sample Complexity of Noise-Tolerant Learning \nAbstract: In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of log(1=ffi) on the number of examples required for PAC learning in the presence of classification noise. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise is VC(F) \"(12) 2 : Furthermore, we demonstrate the optimality of the general lower bound by providing a noise-tolerant learning algorithm for the class of symmetric Boolean functions which uses a sample size within a constant factor of this bound. Finally, we note that our general lower bound compares favorably with various general upper bounds for PAC learning in the presence of classification noise. ",
+ "neighbors": [
+ 29,
+ 62
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 453,
+ "label": 2,
+ "text": "Title: Monte Carlo Comparison of Non-hierarchical Unsupervised Classifiers \nAbstract: In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of log(1=ffi) on the number of examples required for PAC learning in the presence of classification noise. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise is VC(F) \"(12) 2 : Furthermore, we demonstrate the optimality of the general lower bound by providing a noise-tolerant learning algorithm for the class of symmetric Boolean functions which uses a sample size within a constant factor of this bound. Finally, we note that our general lower bound compares favorably with various general upper bounds for PAC learning in the presence of classification noise. ",
+ "neighbors": [
+ 312,
+ 374,
+ 397,
+ 432,
+ 674
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 454,
+ "label": 1,
+ "text": "Title: Evolving Visual Routines Architecture and Planning, \nAbstract: It has been recently realized that parasite virulence (the harm caused by parasites to their hosts) can be an adaptive trait. Selection for a particular level of virulence can happen either at at the level of between-host tradeoffs or as a result of short-sighted within-host competition. This paper describes some simulations which study the effect that modifier genes for changes in mutation rate have on suppressing this short-sighted development of virulence, and investigates the interaction between this and a simplified model of im mune clearance.",
+ "neighbors": [
+ 91,
+ 491,
+ 667,
+ 853
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 455,
+ "label": 2,
+ "text": "Title: on Qualitative Reasoning about Physical Systems Deriving Monotonic Function Envelopes from Observations \nAbstract: Much work in qualitative physics involves constructing models of physical systems using functional descriptions such as \"flow monotonically increases with pressure.\" Semiquantitative methods improve model precision by adding numerical envelopes to these monotonic functions. Ad hoc methods are normally used to determine these envelopes. This paper describes a systematic method for computing a bounding envelope of a multivariate monotonic function given a stream of data. The derived envelope is computed by determining a simultaneous confidence band for a special neural network which is guaranteed to produce only monotonic functions. By composing these envelopes, more complex systems can be simulated using semiquantitative methods. ",
+ "neighbors": [
+ 852
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 456,
+ "label": 0,
+ "text": "Title: Resolving PP attachment Ambiguities with Memory-Based Learning \nAbstract: In this paper we describe the application of Memory-Based Learning to the problem of Prepositional Phrase attachment disambiguation. We compare Memory-Based Learning, which stores examples in memory and generalizes by using intelligent similarity metrics, with a number of recently proposed statistical methods that are well suited to large numbers of features. We evaluate our methods on a common benchmark dataset and show that our method compares favorably to previous methods, and is well-suited to incorporating various unconventional representations of word patterns such as value difference metrics and Lexical Space.",
+ "neighbors": [
+ 653,
+ 743,
+ 787,
+ 990
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 457,
+ "label": 3,
+ "text": "Title: Hidden Markov decision trees \nAbstract: We study a time series model that can be viewed as a decision tree with Markov temporal structure. The model is intractable for exact calculations, thus we utilize variational approximations. We consider three different distributions for the approximation: one in which the Markov calculations are performed exactly and the layers of the decision tree are decoupled, one in which the decision tree calculations are performed exactly and the time steps of the Markov chain are decoupled, and one in which a Viterbi-like assumption is made to pick out a single most likely state sequence. We present simulation results for artificial data and the Bach chorales. Accepted for oral presentation at NIPS*96. ",
+ "neighbors": [
+ 40,
+ 722,
+ 723,
+ 799
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 458,
+ "label": 3,
+ "text": "Title: Stochastic simulation algorithms for dynamic probabilistic networks \nAbstract: Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, \"evidence reversal\" (ER) restructures each time slice of the DPN so that the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called \"survival of the fittest\" sampling (SOF), \"repopulates\" the set of trials at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial. We compare the performance of each algorithm with likelihood weighting on the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the simulation.",
+ "neighbors": [
+ 546,
+ 711,
+ 791,
+ 1209,
+ 1246
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 459,
+ "label": 1,
+ "text": "Title: Stochastic Random or probabilistic but with some direction. For example the arrival of people at\nAbstract: Simulated Annealing Search technique where a single trial solution is modified at random. An energy is defined which represents how good the solution is. The goal is to find the best solution by minimising the energy. Changes which lead to a lower energy are always accepted; an increase is probabilistically accepted. The probability is given by exp(E=k B T ). Where E is the change in energy, k B is a constant and T is the Temperature. Initially the temperature is high corresponding to a liquid or molten state where large changes are possible and it is progressively reduced using a cooling schedule so allowing smaller changes until the system solidifies at a low energy solution. ",
+ "neighbors": [
+ 234,
+ 788
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 460,
+ "label": 6,
+ "text": "Title: Learning with Rare Cases and Small Disjuncts \nAbstract: Systems that learn from examples often create a disjunctive concept definition. Small disjuncts are those disjuncts which cover only a few training examples. The problem with small disjuncts is that they are more error prone than large disjuncts. This paper investigates the reasons why small disjuncts are more error prone than large disjuncts. It shows that when there are rare cases within a domain, then factors such as attribute noise, missing attributes, class noise and training set size can result in small disjuncts being more error prone than large disjuncts and in rare cases being more error prone than common cases. This paper also assesses the impact that these error prone small disjuncts and rare cases have on inductive learning (i.e., on error rate). One key conclusion is that when low levels of attribute noise are applied only to the training set (the ability to learn the correct concept is being evaluated), rare cases within a domain are primarily responsible for making learning difficult.",
+ "neighbors": [
+ 694,
+ 840
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 461,
+ "label": 6,
+ "text": "Title: Asking Questions to Minimize Errors \nAbstract: A number of efficient learning algorithms achieve exact identification of an unknown function from some class using membership and equivalence queries. Using a standard transformation such algorithms can easily be converted to on-line learning algorithms that use membership queries. Under such a transformation the number of equivalence queries made by the query algorithm directly corresponds to the number of mistakes made by the on-line algorithm. In this paper we consider several of the natural classes known to be learnable in this setting, and investigate the minimum number of equivalence queries with accompanying counterexamples (or equivalently the minimum number of mistakes in the on-line model) that can be made by a learning algorithm that makes a polynomial number of membership queries and uses polynomial computation time. We are able both to reduce the number of equivalence queries used by the previous algorithms and often to prove matching lower bounds. As an example, consider the class of DNF formulas over n variables with at most k = O(log n) terms. Previously, the algorithm of Blum and Rudich [BR92] provided the best known upper bound of 2 O(k) log n for the minimum number of equivalence queries needed for exact identification. We greatly improve on this upper bound showing that exactly k counterexamples are needed if the learner knows k a priori and exactly k +1 counterexamples are needed if the learner does not know k a priori. This exactly matches known lower bounds [BC92]. For many of our results we obtain a complete characterization of the tradeoff between the number of membership and equivalence queries needed for exact identification. The classes we consider here are monotone DNF formulas, Horn sentences, O(log n)-term DNF formulas, read-k sat-j DNF formulas, read-once formulas over various bases, and deterministic finite automata. ",
+ "neighbors": [
+ 572,
+ 869,
+ 927
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 462,
+ "label": 1,
+ "text": "Title: A Survey of Evolution Strategies \nAbstract: ",
+ "neighbors": [
+ 22,
+ 91,
+ 499,
+ 545,
+ 610,
+ 611,
+ 629,
+ 640,
+ 645,
+ 676,
+ 702,
+ 745,
+ 746,
+ 777,
+ 807,
+ 817,
+ 948,
+ 958
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 463,
+ "label": 0,
+ "text": "Title: A utility-based approach to learning in a mixed Case-Based and Model-Based Reasoning architecture \nAbstract: Case-based reasoning (CBR) can be used as a form of \"caching\" solved problems to speedup later problem solving. Using \"cached\" cases brings additional costs with it due to retrieval time, case adaptation time and also storage space. Simply storing all cases will result in a situation in which retrieving and trying to adapt old cases will take more time (on average) than not caching at all. This means that caching must be applied selectively to build a case memory that is actually useful. This is a form of the utility problem [4, 2]. The approach taken here is to construct a \"cost model\" of a system that can be used to predict the effect of changes to the system. In this paper we describe the utility problem associated with \"caching\" cases and the construction of a \"cost model\". We present experimental results that demonstrate that the model can be used to predict the effect of certain changes to the case memory.",
+ "neighbors": [
+ 636,
+ 942
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 464,
+ "label": 0,
+ "text": "Title: Massively Parallel Support for Case-based Planning \nAbstract: In case-based planning (CBP), previously generated plans are stored as cases in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over planning from scratch (generative planning), thus offering a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory that requires significant domain engineering and complex memory indexing schemes to enable efficient case retrieval. In contrast, our CBP system, CaPER, is based on a massively parallel frame-based AI language and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large casebases can be used; and memory can be probed in numerous alternate ways, allowing more specific retrieval of stored plans that better fit a target problem with less adaptation. fl Preliminary version of an article appearing in IEEE Expert, February 1994, pp. 8-14. This paper is an extended version of [1]. ",
+ "neighbors": [
+ 182
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 465,
+ "label": 3,
+ "text": "Title: Double Censoring: Characterization and Computation of the Nonparametric Maximum Likelihood Estimator \nAbstract: In case-based planning (CBP), previously generated plans are stored as cases in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over planning from scratch (generative planning), thus offering a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory that requires significant domain engineering and complex memory indexing schemes to enable efficient case retrieval. In contrast, our CBP system, CaPER, is based on a massively parallel frame-based AI language and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large casebases can be used; and memory can be probed in numerous alternate ways, allowing more specific retrieval of stored plans that better fit a target problem with less adaptation. fl Preliminary version of an article appearing in IEEE Expert, February 1994, pp. 8-14. This paper is an extended version of [1]. ",
+ "neighbors": [
+ 558
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 466,
+ "label": 1,
+ "text": "Title: Optimal and Asymptotically Optimal Equi-partition of Rectangular Domains via Stripe Decomposition \nAbstract: We present an efficient method for assigning any number of processors to tasks associated with the cells of a rectangular uniform grid. Load balancing equi-partition constraints are observed while approximately minimizing the total perimeter of the partition, which corresponds to the amount of interprocessor communication. This method is based upon decomposition of the grid into stripes of \"optimal\" height. We prove that under some mild assumptions, as the problem size grows large in all parameters, the error bound associated with this feasible solution approaches zero. We also present computational results from a high level parallel Genetic Algorithm that utilizes this method, and make comparisons with other methods. On a network of workstations, our algorithm solves within minutes instances of the problem that would require one billion binary variables in a Quadratic Assignment formulation.",
+ "neighbors": [
+ 27,
+ 205
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 467,
+ "label": 2,
+ "text": "Title: Critical Points for Least-Squares Problems Involving Certain Analytic Functions, with Applications to Sigmoidal Nets \nAbstract: This paper deals with nonlinear least-squares problems involving the fitting to data of parameterized analytic functions. For generic regression data, a general result establishes the countability, and under stronger assumptions finiteness, of the set of functions giving rise to critical points of the quadratic loss function. In the special case of what are usually called \"single-hidden layer neural networks,\" which are built upon the standard sigmoidal activation tanh(x) (or equivalently (1 + e x ) 1 ), a rough upper bound for this cardinality is provided as well.",
+ "neighbors": [
+ 539,
+ 565
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 468,
+ "label": 0,
+ "text": "Title: The Role of Generic Models in Conceptual Change \nAbstract: 1 This research was funded in part by NSF Grant No. IRI-92-10925 and in part by ONR Grant No. N00014-92-J-1234. We thank John Clement for the use of his protocol transcript, James Greeno for his contribution to developing our constructive modeling interpretation of it, and Ryan Tweney for his helpful comments Todd W. Griffith, Nancy J. Nersessian, and Ashok Goel Abstract We hypothesize generic models to be central in conceptual change in science. This hypothesis has its origins in two theoretical sources. The first source, constructive modeling, derives from a philosophical theory that synthesizes analyses of historical conceptual changes in science with investigations of reasoning and representation in cognitive psychology. The theory of constructive modeling posits generic mental models as productive in conceptual change. The second source, adaptive modeling, derives from a computational theory of creative design. Both theories posit situation independent domain abstractions, i.e. generic models. Using a constructive modeling interpretation of the reasoning exhibited in protocols collected by John Clement (1989) of a problem solving session involving conceptual change, we employ the resources of the theory of adaptive modeling to develop a new computational model, ToRQUE. Here we describe a piece of our analysis of the protocol to illustrate how our synthesis of the two theories is being used to develop a system for articulating and testing ToRQUE. The results of our research show how generic modeling plays a central role in conceptual change. They also demonstrate how such an interdisciplinary synthesis can provide significant insights into scientific reasoning. ",
+ "neighbors": [
+ 644,
+ 761
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 469,
+ "label": 2,
+ "text": "Title: Unsupervised Discrimination of Clustered Data via Optimization of Binary Information Gain \nAbstract: We present the information-theoretic derivation of a learning algorithm that clusters unlabelled data with linear discriminants. In contrast to methods that try to preserve information about the input patterns, we maximize the information gained from observing the output of robust binary discriminators implemented with sigmoid nodes. We derive a local weight adaptation rule via gradient ascent in this objective, demonstrate its dynamics on some simple data sets, relate our approach to previous work and suggest directions in which it may be extended.",
+ "neighbors": [
+ 207,
+ 422
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 470,
+ "label": 2,
+ "text": "Title: A Self-Adjusting Dynamic Logic Module \nAbstract: This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on Adaptive Algorithm 2 (AA2) and details its architecture and learning algorithm. AA2 has significant memory and knowledge maintenance advantages over previous ASOCS models. An ASOCS can operate in either a data processing mode or a learning mode. During learning mode, the ASOCS is given a new rule expressed as a boolean conjunction. The AA2 learning algorithm incorporates the new rule in a distributed fashion in a short, bounded time. During data processing mode, the ASOCS acts as a parallel hardware circuit. ",
+ "neighbors": [
+ 171,
+ 472,
+ 473,
+ 527,
+ 614,
+ 641,
+ 670,
+ 685,
+ 690,
+ 903,
+ 912
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 471,
+ "label": 2,
+ "text": "Title: MIXED MEMORY MARKOV MODELS FOR TIME SERIES ANALYSIS \nAbstract: This paper presents a method for analyzing coupled time series using Markov models in a domain where the state space is immense. To make the parameter estimation tractable, the large state space is represented as the Cartesian product of smaller state spaces, a paradigm known as factorial Markov models. The transition matrix for this model is represented as a mixture of the transition matrices of the underlying dynamical processes. This formulation is know as mixed memory Markov models. Using this framework, we analyze the daily exchange rates for five currencies - British pound, Canadian dollar, Deutsch mark, Japanese yen, and Swiss franc as measured against the U.S. dollar.",
+ "neighbors": [
+ 722,
+ 951
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 472,
+ "label": 2,
+ "text": "Title: Word Perfect Corp. A TRANSFORMATION FOR IMPLEMENTING EFFICIENT DYNAMIC BACKPROPAGATION NEURAL NETWORKS \nAbstract: Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown ability to overcome many of these problems. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing distributed feedforward networks that use dynamic topologies (dynamic ANNs) efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient sup port for adding and deleting nodes dynamically during learning. In particular, this paper presents an LIT for standard Backpropagation with two layers of weights, and shows how dynamic extensions to Backpropagation can be supported. ",
+ "neighbors": [
+ 470,
+ 473,
+ 614,
+ 753
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 473,
+ "label": 2,
+ "text": "Title: A VLSI Implementation of a Parallel, Self-Organizing Learning Model \nAbstract: This paper presents a VLSI implementation of the Priority Adaptive Self-Organizing Concurrent System (PASOCS) learning model that is built using a multi-chip module (MCM) substrate. Many current hardware implementations of neural network learning models are direct implementations of classical neural network structures|a large number of simple computing nodes connected by a dense number of weighted links. PASOCS is one of a class of ASOCS (Adaptive Self-Organizing Concurrent System) connectionist models whose overall goal is the same as classical neural networks models, but whose functional mechanisms differ significantly. This model has potential application in areas such as pattern recognition, robotics, logical inference, and dynamic control. ",
+ "neighbors": [
+ 470,
+ 472,
+ 614,
+ 641,
+ 738
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 474,
+ "label": 0,
+ "text": "Title: Case-Based Similarity Assessment: Estimating Adaptability from Experience \nAbstract: Case-based problem-solving systems rely on similarity assessment to select stored cases whose solutions are easily adaptable to fit current problems. However, widely-used similarity assessment strategies, such as evaluation of semantic similarity, can be poor predictors of adaptability. As a result, systems may select cases that are difficult or impossible for them to adapt, even when easily adaptable cases are available in memory. This paper presents a new similarity assessment approach which couples similarity judgments directly to a case library containing the system's adaptation knowledge. It examines this approach in the context of a case-based planning system that learns both new plans and new adaptations. Empirical tests of alternative similarity assessment strategies show that this approach enables better case selection and increases the benefits accrued from learned adaptations. ",
+ "neighbors": [
+ 475,
+ 476,
+ 638,
+ 679,
+ 769
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 475,
+ "label": 0,
+ "text": "Title: Learning to Integrate Multiple Knowledge Sources for Case-Based Reasoning \nAbstract: The case-based reasoning process depends on multiple overlapping knowledge sources, each of which provides an opportunity for learning. Exploiting these opportunities requires not only determining the learning mechanisms to use for each individual knowledge source, but also how the different learning mechanisms interact and their combined utility. This paper presents a case study examining the relative contributions and costs involved in learning processes for three different knowledge sources|cases, case adaptation knowledge, and similarity information|in a case-based planner. It demonstrates the importance of interactions between different learning processes and identifies a promising method for integrating multiple learning methods to improve case-based reasoning.",
+ "neighbors": [
+ 339,
+ 474,
+ 476,
+ 638,
+ 679,
+ 681
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 476,
+ "label": 0,
+ "text": "Title: A Case Study of Case-Based CBR \nAbstract: Case-based reasoning depends on multiple knowledge sources beyond the case library, including knowledge about case adaptation and criteria for similarity assessment. Because hand coding this knowledge accounts for a large part of the knowledge acquisition burden for developing CBR systems, it is appealing to acquire it by learning, and CBR is a promising learning method to apply. This observation suggests developing case-based CBR systems, CBR systems whose components themselves use CBR. However, despite early interest in case-based approaches to CBR, this method has received comparatively little attention. Open questions include how case-based components of a CBR system should be designed, the amount of knowledge acquisition effort they require, and their effectiveness. This paper investigates these questions through a case study of issues addressed, methods used, and results achieved by a case-based planning system that uses CBR to guide its case adaptation and similarity assessment. The paper discusses design considerations and presents empirical results that support the usefulness of case-based CBR, that point to potential problems and tradeoffs, and that directly demonstrate the overlapping roles of different CBR knowledge sources. The paper closes with general lessons about case-based CBR and areas for future research.",
+ "neighbors": [
+ 474,
+ 475,
+ 679,
+ 681
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 477,
+ "label": 2,
+ "text": "Title: Massive Data Discrimination via Linear Support Vector Machines \nAbstract: A linear support vector machine formulation is used to generate a fast, finitely-terminating linear-programming algorithm for discriminating between two massive sets in n-dimensional space, where the number of points can be orders of magnitude larger than n. The algorithm creates a succession of sufficiently small linear programs that separate chunks of the data at a time. The key idea is that a small number of support vectors, corresponding to linear programming constraints with positive dual variables, are carried over between the successive small linear programs, each of which containing a chunk of the data. We prove that this procedure is monotonic and terminates in a finite number of steps at an exact solution that leads to a globally optimal separating plane for the entire dataset. Numerical results on fully dense publicly available datasets, numbering 20,000 to 1 million points in 32-dimensional space, confirm the theoretical results and demonstrate the ability to handle very large problems.",
+ "neighbors": [
+ 354,
+ 782
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 478,
+ "label": 0,
+ "text": "Title: Merge Strategies for Multiple Case Plan Replay \nAbstract: Planning by analogical reasoning is a learning method that consists of the storage, retrieval, and replay of planning episodes. Planning performance improves with the accumulation and reuse of a library of planning cases. Retrieval is driven by domain-dependent similarity metrics based on planning goals and scenarios. In complex situations with multiple goals, retrieval may find multiple past planning cases that are jointly similar to the new planning situation. This paper presents the issues and implications involved in the replay of multiple planning cases, as opposed to a single one. Multiple case plan replay involves the adaptation and merging of the annotated derivations of the planning cases. Several merge strategies for replay are introduced that can process with various forms of eagerness the differences between the past and new situations and the annotated justifications at the planning cases. In particular, we introduce an effective merging strategy that considers plan step choices especially appropriate for the interleaving of planning and plan execution. We illustrate and discuss the effectiveness of the merging strategies in specific domains.",
+ "neighbors": [
+ 681,
+ 906
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 479,
+ "label": 0,
+ "text": "Title: Towards Mixed-Initiative Rationale-Supported Planning \nAbstract: This paper introduces our work on mixed-initiative, rationale-supported planning. The work centers on the principled reuse and modification of past plans by exploiting their justification structure. The goal is to record as much as possible of the rationale underlying each planning decision in a mixed-initiative framework where human and machine planners interact. This rationale is used to determine which past plans are relevant to a new situation, to focus user's modification and replanning on different relevant steps when external circumstances dictate, and to ensure consistency in multi-user distributed scenarios. We build upon our previous work in Prodigy/Analogy, which incorporates algorithms to capture and reuse the rationale of an automated planner during its plan generation. To support a mixed-initiative environment, we have developed user interactive capabilities in the Prodigy planning and learning system. We are also working towards the integration of the rationale-supported plan reuse in Prodigy/Analogy with the plan retrieval and modification tools of ForMAT. Finally, we have focused on the user's input into the process of plan reuse, in particular when conditional planning is needed. ",
+ "neighbors": [
+ 681
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 480,
+ "label": 2,
+ "text": "Title: AN ANYTIME APPROACH TO CONNECTIONIST THEORY REFINEMENT: REFINING THE TOPOLOGIES OF KNOWLEDGE-BASED NEURAL NETWORKS \nAbstract: We present two algorithms for inducing structural equation models from data. Assuming no latent variables, these models have a causal interpretation and their parameters may be estimated by linear multiple regression. Our algorithms are comparable with PC [15] and IC [12, 11], which rely on conditional independence. We present the algorithms and empirical comparisons with PC and IC. ",
+ "neighbors": [
+ 792,
+ 809
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 481,
+ "label": 0,
+ "text": "Title: Beyond predictive accuracy: what? \nAbstract: Today's potential users of machine learning technology are faced with the non-trivial problem of choosing, from the large, ever-increasing number of available tools, the one most appropriate for their particular task. To assist the often non-initiated users, it is desirable that this model selection process be automated. Using experience from base level learning, researchers have proposed meta-learning as a possible solution. Historically, predictive accuracy has been the de facto criterion, with most work in meta-learning focusing on the discovery of rules that match applications to models based on accuracy only. Although predictive accuracy is clearly an important criterion, it is also the case that there are a number of other criteria that could, and often ought to, be considered when learning about model selection. This paper presents a number of such criteria and discusses the impact they have on meta-level approaches to model selection.",
+ "neighbors": [
+ 514,
+ 1171
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 482,
+ "label": 1,
+ "text": "Title: Graph Coloring with Adaptive Evolutionary Algorithms \nAbstract: This paper presents the results of an experimental investigation on solving graph coloring problems with Evolutionary Algorithms (EA). After testing different algorithm variants we conclude that the best option is an asexual EA using order-based representation and an adaptation mechanism that periodically changes the fitness function during the evolution. This adaptive EA is general, using no domain specific knowledge, except, of course, from the decoder (fitness function). We compare this adaptive EA to a powerful traditional graph coloring technique DSatur and the Grouping GA on a wide range of problem instances with different size, topology and edge density. The results show that the adaptive EA is superior to the Grouping GA and outperforms DSatur on the hardest problem instances. Furthermore, it scales up better with the problem size than the other two algorithms and indicates a linear computational complexity. ",
+ "neighbors": [
+ 415,
+ 683,
+ 844,
+ 984
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 483,
+ "label": 2,
+ "text": "Title: Simple Neuron Models for Independent Component Analysis \nAbstract: Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.",
+ "neighbors": [
+ 335,
+ 609,
+ 845,
+ 992
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 484,
+ "label": 0,
+ "text": "Title: Case-Based Acquisition of Place Knowledge \nAbstract: In this paper we define the task of place learning and describe one approach to this problem. The framework represents distinct places using evidence grids, a probabilistic description of occupancy. Place recognition relies on case-based classification, augmented by a registration process to correct for translations. The learning mechanism is also similar to that in case-based systems, involving the simple storage of inferred evidence grids. Experimental studies with both physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, and that it scales well to increasing numbers of places. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used the experimental methods of machine learning to evaluate their methods' abilities. ",
+ "neighbors": [
+ 400
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 485,
+ "label": 6,
+ "text": "Title: Unsupervised Constructive Learning \nAbstract: In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This is useful when the initial representation is inadequate or inappropriate. In this paper, I argue that the distinction between constructive and non-constructive methods is unclear. I propose a theoretical model which allows (a) a clean distinction to be made and (b) the process of CI to be properly motivated. I also show that although constructive induction has been used almost exclusively in the context of supervised learning, there is no reason why it cannot form a part of an unsupervised regime.",
+ "neighbors": [
+ 216,
+ 239,
+ 710,
+ 892
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 486,
+ "label": 5,
+ "text": "Title: Inductive Database Design \nAbstract: When designing a (deductive) database, the designer has to decide for each predicate (or relation) whether it should be defined extensionally or intensionally, and what the definition should look like. An intelligent system is presented to assist the designer in this task. It starts from an example database in which all predicates are defined extensionally. It then tries to compact the database by transforming extensionally defined predicates into intensionally defined ones. The intelligent system employs techniques from the area of inductive logic programming. ",
+ "neighbors": [
+ 573,
+ 938
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 487,
+ "label": 2,
+ "text": "Title: Signal Separation by Nonlinear Hebbian Learning \nAbstract: When we work with information from multiple sources, the formalism each employs to handle uncertainty may not be uniform. In order to be able to combine these knowledge bases of different formats, we need to first establish a common basis for characterizing and evaluating the different formalisms, and provide a semantics for the combined mechanism. A common framework can provide an infrastructure for building an integrated system, and is essential if we are to understand its behavior. We present a unifying framework based on an ordered partition of possible worlds called partition sequences, which corresponds to our intuitive notion of biasing towards certain possible scenarios when we are uncertain of the actual situation. We show that some of the existing formalisms, namely, default logic, autoepistemic logic, probabilistic conditioning and thresholding (generalized conditioning), and possibility theory can be incorporated into this general framework.",
+ "neighbors": [
+ 32,
+ 335,
+ 506,
+ 609,
+ 845
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 488,
+ "label": 3,
+ "text": "Title: Bayesian inference for nondecomposable graphical Gaussian models \nAbstract: In this paper we propose a method to calculate the posterior probability of a nondecomposable graphical Gaussian model. Our proposal is based on a new device to sample from Wishart distributions, conditional on the graphical constraints. As a result, our methodology allows Bayesian model selection within the whole class of graphical Gaussian models, including nondecomposable ones.",
+ "neighbors": [
+ 698,
+ 1294
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 489,
+ "label": 4,
+ "text": "Title: Locally Weighted Learning for Control \nAbstract: Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control. ",
+ "neighbors": [
+ 334
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 490,
+ "label": 1,
+ "text": "Title: Evolving Compact Solutions in Genetic Programming: A Case Study \nAbstract: Genetic programming (GP) is a variant of genetic algorithms where the data structures handled are trees. This makes GP especially useful for evolving functional relationships or computer programs, as both can be represented as trees. Symbolic regression is the determination of a function dependence y = g(x) that approximates a set of data points (x i ; y i ). In this paper the feasibility of symbolic regression with GP is demonstrated on two examples taken from different domains. Furthermore several suggested methods from literature are compared that are intended to improve GP performance and the readability of solutions by taking into account introns or redundancy that occurs in the trees and keeping the size of the trees small. The experiments show that GP is an elegant and useful tool to derive complex functional dependencies on numerical data.",
+ "neighbors": [
+ 28,
+ 542,
+ 667,
+ 978
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 491,
+ "label": 1,
+ "text": "Title: Evolving Visually Guided Robots \nAbstract: A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. ",
+ "neighbors": [
+ 123,
+ 439,
+ 454,
+ 853,
+ 1143,
+ 1156
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 492,
+ "label": 6,
+ "text": "Title: A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with\nAbstract: We give an analysis of the generalization error of cross validation in terms of two natural measures of the difficulty of the problem under consideration: the approximation rate (the accuracy to which the target function can be ideally approximated as a function of the number of hypothesis parameters), and the estimation rate (the deviation between the training and generalization errors as a function of the number of hypothesis parameters). The approximation rate captures the complexity of the target function with respect to the hypothesis model, and the estimation rate captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of cross validation. The bound clearly shows the tradeoffs involved with making fl the fraction of data saved for testing too large or too small. By optimizing the bound with respect to fl, we then argue (through a combination of formal analysis, plotting, and controlled experimentation) that the following qualitative properties of cross validation behavior should be quite robust to significant changes in the underlying model selection problem: ",
+ "neighbors": [
+ 493,
+ 556,
+ 592,
+ 818
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 493,
+ "label": 6,
+ "text": "Title: An Experimental and Theoretical Comparison of Model Selection Methods on simple model selection problems, the\nAbstract: We investigate the problem of model selection in the setting of supervised learning of boolean functions from independent random examples. More precisely, we compare methods for finding a balance between the complexity of the hypothesis chosen and its observed error on a random training sample of limited size, when the goal is that of minimizing the resulting generalization error. We undertake a detailed comparison of three well-known model selection methods | a variation of Vapnik's Guaranteed Risk Minimization (GRM), an instance of Rissanen's Minimum Description Length Principle (MDL), and cross validation (CV). We introduce a general class of model selection methods (called penalty-based methods) that includes both GRM and MDL, and provide general methods for analyzing such rules. We provide both controlled experimental evidence and formal theorems to support the following conclusions: * The class of penalty-based methods is fundamentally handicapped in the sense that there exist two types of model selection problems for which every penalty-based method must incur large generalization error on at least one, while CV enjoys small generalization error Despite the inescapable incomparability of model selection methods under certain circumstances, we conclude with a discussion of our belief that the balance of the evidence provides specific reasons to prefer CV to other methods, unless one is in possession of detailed problem-specific information. on both.",
+ "neighbors": [
+ 346,
+ 492,
+ 556,
+ 592,
+ 686,
+ 785,
+ 818,
+ 899
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 494,
+ "label": 3,
+ "text": "Title: Bayesian Probability Theory A General Method for Machine Learning \nAbstract: This paper argues that Bayesian probability theory is a general method for machine learning. From two well-founded axioms, the theory is capable of accomplishing learning tasks that are incremental or non-incremental, supervised or unsupervised. It can learn from different types of data, regardless of whether they are noisy or perfect, independent facts or behaviors of an unknown machine. These capabilities are (partially) demonstrated in the paper through the uniform application of the theory to two typical types of machine learning: incremental concept learning and unsupervised data classification. The generality of the theory suggests that the process of learning may not have so many different \"types\" as currently held, and the method that is the oldest may be the best after all. ",
+ "neighbors": [
+ 832
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 495,
+ "label": 3,
+ "text": "Title: Bayesian Models for Non-Linear Autoregressions \nAbstract: This paper argues that Bayesian probability theory is a general method for machine learning. From two well-founded axioms, the theory is capable of accomplishing learning tasks that are incremental or non-incremental, supervised or unsupervised. It can learn from different types of data, regardless of whether they are noisy or perfect, independent facts or behaviors of an unknown machine. These capabilities are (partially) demonstrated in the paper through the uniform application of the theory to two typical types of machine learning: incremental concept learning and unsupervised data classification. The generality of the theory suggests that the process of learning may not have so many different \"types\" as currently held, and the method that is the oldest may be the best after all. ",
+ "neighbors": [
+ 578,
+ 923
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 496,
+ "label": 6,
+ "text": "Title: Error-Correcting Output Codes for Local Learners \nAbstract: Error-correcting output codes (ECOCs) represent classes with a set of output bits, where each bit encodes a binary classification task corresponding to a unique partition of the classes. Algorithms that use ECOCs learn the function corresponding to each bit, and combine them to generate class predictions. ECOCs can reduce both variance and bias errors for multiclass classification tasks when the errors made at the output bits are not correlated. They work well with algorithms that eagerly induce global classifiers (e.g., C4.5) but do not assist simple local classifiers (e.g., nearest neighbor), which yield correlated predictions across the output bits. We show that the output bit predictions of local learners can be decorrelated by selecting different features for each bit. We present promising empirical results for this combination of ECOCs, near est neighbor, and feature selection.",
+ "neighbors": [
+ 582,
+ 601,
+ 956,
+ 1245
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 497,
+ "label": 1,
+ "text": "Title: A Comparison of Random Search versus Genetic Programming as Engines for Collective Adaptation \nAbstract: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine.",
+ "neighbors": [
+ 664,
+ 692,
+ 1311
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 498,
+ "label": 3,
+ "text": "Title: Hierarchical priors and mixture models, with application in regression and density estimation \nAbstract: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine.",
+ "neighbors": [
+ 750,
+ 923
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 499,
+ "label": 1,
+ "text": "Title: Hierarchical priors and mixture models, with application in regression and density estimation \nAbstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993 ",
+ "neighbors": [
+ 91,
+ 462,
+ 579,
+ 652
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 500,
+ "label": 0,
+ "text": "Title: MULTISTRATEGY LEARNING IN REACTIVE CONTROL SYSTEMS FOR AUTONOMOUS ROBOTIC NAVIGATION \nAbstract: This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line case learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.",
+ "neighbors": [
+ 163,
+ 328,
+ 566,
+ 617,
+ 1088,
+ 1198
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 501,
+ "label": 1,
+ "text": "Title: A Study in Program Response and the Negative Effects of Introns in Genetic Programming \nAbstract: The standard method of obtaining a response in tree-based genetic programming is to take the value returned by the root node. In non-tree representations, alternate methods have been explored. One alternative is to treat a specific location in indexed memory as the response value when the program terminates. The purpose of this paper is to explore the applicability of this technique to tree-structured programs and to explore the intron effects that these studies bring to light. This paper's experimental results support the finding that this memory-based program response technique is an improvement for some, but not all, problems. In addition, this paper's experimental results support the finding that, contrary to past research and speculation, the addition or even facilitation of introns can seriously degrade the search performance of genetic programming.",
+ "neighbors": [
+ 542,
+ 667,
+ 1034,
+ 1047
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 502,
+ "label": 6,
+ "text": "Title: ON THE SAMPLE COMPLEXITY OF FINDING GOOD SEARCH STRATEGIES 2n trials of each undetermined experiment\nAbstract: A satisficing search problem consists of a set of probabilistic experiments to be performed in some order, without repetitions, until a satisfying configuration of successes and failures has been reached. The cost of performing the experiments depends on the order chosen. Earlier work has concentrated on finding optimal search strategies in special cases of this model, such as search trees and and-or graphs, when the cost function and the success probabilities for the experiments are given. In contrast, we study the complexity of \"learning\" an approximately optimal search strategy when some of the success probabilities are not known at the outset. Working in the fully general model, we show that if n is the number of unknown probabilities, and C is the maximum cost of performing all the experiments, then ",
+ "neighbors": [
+ 144,
+ 541
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 503,
+ "label": 2,
+ "text": "Title: Comparison of Neural and Statistical Classifiers| Theory and Practice \nAbstract: Research Reports A13 January 1996 ",
+ "neighbors": [
+ 24,
+ 40,
+ 387
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 504,
+ "label": 3,
+ "text": "Title: Efficient Stochastic Source Coding and an Application to a Bayesian Network Source Model \nAbstract: Brendan J. Frey and Geoffrey E. Hinton 1997. Efficient stochastic source coding and an application to a Bayesian network source model. The Computer Journal 40, 157-165. In this paper, we introduce a new algorithm called \"bits-back coding\" that makes stochastic source codes efficient. For a given one-to-many source code, we show that this algorithm can actually be more efficient than the algorithm that always picks the shortest codeword. Optimal efficiency is achieved when codewords are chosen according to the Boltzmann distribution based on the codeword lengths. It turns out that a commonly used technique for determining parameters | maximum likelihood estimation | actually minimizes the bits-back coding cost when codewords are chosen according to the Boltzmann distribution. A tractable approximation to maximum likelihood estimation | the generalized expectation maximization algorithm | minimizes the bits-back coding cost. After presenting a binary Bayesian network model that assigns exponentially many codewords to each symbol, we show how a tractable approximation to the Boltzmann distribution can be used for bits-back coding. We illustrate the performance of bits-back coding using using nonsynthetic data with a binary Bayesian network source model that produces 2 60 possible codewords for each input symbol. The rate for bits-back coding is nearly one half of that obtained by picking the shortest codeword for each symbol. ",
+ "neighbors": [
+ 42,
+ 773
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 505,
+ "label": 2,
+ "text": "Title: A Blind Identification and Separation Technique via Multi-layer Neural Networks \nAbstract: This paper deals with the problem of blind identification and source separation which consists of estimation of the mixing matrix and/or the separation of a mixture of stochastically independent sources without a priori knowledge on the mixing matrix . The method we propose here estimates the mixture matrix by a recurrent Input-Output (IO) Identification using as inputs a nonlinear transformation of the estimated sources. Herein, the nonlinear transformation (distortion) consists in constraining the modulus of the inputs of the IO-Identification device to be a constant. In contrast to other existing approaches, the covariance of the additive noise do not need to be modeled and can be estimated as a regular parameter if needed. The proposed approach is implemented using multi-layer neural networks in order to improve performance of separation. New associated on-line un-supervised adaptive learning rules are also developed. The effectiveness of the proposed method is illustrated by some computer simulations. ",
+ "neighbors": [
+ 32,
+ 331,
+ 845
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 506,
+ "label": 2,
+ "text": "Title: LOCAL ADAPTIVE LEARNING ALGORITHMS FOR BLIND SEPARATION OF NATURAL IMAGES \nAbstract: In this paper a neural network approach for reconstruction of natural highly correlated images from linear (additive) mixture of them is proposed. A multi-layer architecture with local on-line learning rules is developed to solve the problem of blind separation of sources. The main motivation for using a multi-layer network instead of a single-layer one is to improve the performance and robustness of separation, while applying a very simple local learning rule, which is biologically plausible. Moreover such architecture with on-chip learning is relatively easy implementable using VLSI electronic circuits. Furthermore it enables the extraction of source signals sequentially one after the other, starting from the strongest signal and finishing with the weakest one. The experimental part focuses on separating highly correlated human faces from mixture of them, with additive noise and under unknown number of sources. ",
+ "neighbors": [
+ 32,
+ 331,
+ 335,
+ 487,
+ 845
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 507,
+ "label": 4,
+ "text": "Title: LEARNING TO SOLVE MARKOVIAN DECISION PROCESSES \nAbstract: In this paper a neural network approach for reconstruction of natural highly correlated images from linear (additive) mixture of them is proposed. A multi-layer architecture with local on-line learning rules is developed to solve the problem of blind separation of sources. The main motivation for using a multi-layer network instead of a single-layer one is to improve the performance and robustness of separation, while applying a very simple local learning rule, which is biologically plausible. Moreover such architecture with on-chip learning is relatively easy implementable using VLSI electronic circuits. Furthermore it enables the extraction of source signals sequentially one after the other, starting from the strongest signal and finishing with the weakest one. The experimental part focuses on separating highly correlated human faces from mixture of them, with additive noise and under unknown number of sources. ",
+ "neighbors": [
+ 214,
+ 320
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 508,
+ "label": 6,
+ "text": "Title: Using and combining predictors that specialize \nAbstract: We study online learning algorithms that predict by combining the predictions of several subordinate prediction algorithms, sometimes called experts. These simple algorithms belong to the multiplicative weights family of algorithms. The performance of these algorithms degrades only logarithmically with the number of experts, making them particularly useful in applications where the number of experts is very large. However, in applications such as text categorization, it is often natural for some of the experts to abstain from making predictions on some of the instances. We show how to transform algorithms that assume that all experts are always awake to algorithms that do not require this assumption. We also show how to derive corresponding loss bounds. Our method is very general, and can be applied to a large family of online learning algorithms. We also give applications to various prediction models including decision graphs and switching experts. ",
+ "neighbors": [
+ 255,
+ 586
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 509,
+ "label": 5,
+ "text": "Title: Naive Bayesian classifier within ILP-R \nAbstract: When dealing with the classification problems, current ILP systems often lag behind state-of-the-art attributional learners. Part of the blame can be ascribed to a much larger hypothesis space which, therefore, cannot be as thoroughly explored. However, sometimes it is due to the fact that ILP systems do not take into account the probabilistic aspects of hypotheses when classifying unseen examples. This paper proposes just that. We developed a naive Bayesian classifier within our ILP-R first order learner. The learner itself uses a clever RELIEF based heuristic which is able to detect strong dependencies within the literal space when such dependencies exist. We conducted a series of experiments on artificial and real-world data sets. The results show that the combination of ILP-R together with the naive Bayesian classifier sometimes significantly improves the classification of unseen instances as measured by both classification accuracy and average information score.",
+ "neighbors": [
+ 576,
+ 875,
+ 882,
+ 921
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 510,
+ "label": 2,
+ "text": "Title: Proben1 A Set of Neural Network Benchmark Problems and Benchmarking Rules \nAbstract: Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison. ",
+ "neighbors": [
+ 40,
+ 64,
+ 332,
+ 635,
+ 674,
+ 789,
+ 1155,
+ 1194,
+ 1239,
+ 1245
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 511,
+ "label": 4,
+ "text": "Title: Learning To Play the Game of Chess \nAbstract: This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.",
+ "neighbors": [
+ 300,
+ 327,
+ 334,
+ 776
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 512,
+ "label": 0,
+ "text": "Title: A Preprocessing Model for Integrating CBR and Prototype-Based Neural Networks \nAbstract: Some important factors that play a major role in determining the performances of a CBR (Case-Based Reasoning) system are the complexity and the accuracy of the retrieval phase. Both flat memory and inductive approaches suffer from serious drawbacks. In the first approach, the search time increases when dealing with large scale memory base, while in the second one the modification of the case memory becomes very complex because of its sophisticated architecture. In this paper, we show how we construct a simple efficient indexing system structure. The idea is to construct a case hierarchy with two levels of memory: the lower level contains cases organised into groups of similar cases, while the upper level contains prototypes. each prototype represents one group of cases. This smaller memory is used during the retrieval phase. Prototype construction is achieved by means of an incremental prototype-based NN (Neural Network). We show that this mode of CBR-NN coupling is a preprocessing one where the neural network serves as an indexing system to the ",
+ "neighbors": [
+ 678,
+ 928
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 513,
+ "label": 6,
+ "text": "Title: PAC Learning of One-Dimensional Patterns \nAbstract: Developing the ability to recognize a landmark from a visual image of a robot's current location is a fundamental problem in robotics. We consider the problem of PAC-learning the concept class of geometric patterns where the target geometric pattern is a configuration of k points on the real line. Each instance is a configuration of n points on the real line, where it is labeled according to whether or not it visually resembles the target pattern. To capture the notion of visual resemblance we use the Hausdorff metric. Informally, two geometric patterns P and Q resemble each other under the Hausdorff metric, if every point on one pattern is \"close\" to some point on the other pattern. We relate the concept class of geometric patterns to the landmark recognition problem and then present a polynomial-time algorithm that PAC-learns the class of one-dimensional geometric patterns. We also present some experimental results on how our algorithm performs. ",
+ "neighbors": [
+ 62
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 514,
+ "label": 6,
+ "text": "Title: Model selection using measure functions \nAbstract: The concept of measure functions for generalization performance is suggested. This concept provides an alternative way of selecting and evaluating learned models (classifiers). In addition, it makes it possible to state a learning problem as a computational problem. The the known prior (meta-)knowledge about the problem domain is captured in a measure function that, to each possible combination of a training set and a classifier, assigns a value describing how good the classifier is. The computational problem is then to find a classifier maximizing the measure function. We argue that measure functions are of great value for practical applications. Besides of being a tool for model selection, they: (i) force us to make explicit the relevant prior knowledge about the learning problem at hand, (ii) provide a deeper understanding of existing algorithms, and (iii) help us in the construction of problem-specific algorithms. We illustrate the last point by suggesting a novel algorithm based on incremental search for a classifier that optimizes a given measure function.",
+ "neighbors": [
+ 98,
+ 481,
+ 747
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 515,
+ "label": 2,
+ "text": "Title: In Search Of Articulated Attractors \nAbstract: Recurrent attractor networks offer many advantages over feed-forward networks for the modeling of psychological phenomena. Their dynamic nature allows them to capture the time course of cognitive processing, and their learned weights may often be easily interpreted as soft constraints between representational components. Perhaps the most significant feature of such networks, however, is their ability to facilitate generalization by enforcing well formedness constraints on intermediate and output representations. Attractor networks which learn the systematic regularities of well formed representations by exposure to a small number of examples are said to possess articulated attractors. This paper investigates the conditions under which articulated attractors arise in recurrent networks trained using variants of backpropagation. The results of computational experiments demonstrate that such structured attrac-tors can spontaneously appear in an emergence of systematic-ity, if an appropriate error signal is presented directly to the recurrent processing elements. We show, however, that distal error signals, backpropagated through intervening weights, pose serious problems for networks of this kind. We present simulation results, discuss the reasons for this difficulty, and suggest some directions for future attempts to surmount it. ",
+ "neighbors": [
+ 727
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 516,
+ "label": 6,
+ "text": "Title: Simplifying Decision Trees: A Survey \nAbstract: Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.",
+ "neighbors": [
+ 564
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 517,
+ "label": 3,
+ "text": "Title: Bounding Convergence Time of the Gibbs Sampler in Bayesian Image Restoration \nAbstract: This paper gives precise, easy to compute bounds on the convergence time of the Gibbs sampler used in Bayesian image reconstruction. For sampling from the Gibbs distribution both with and without the presence of an external field, bounds that are N 2 in the number of pixels are obtained, with a proportionality constant that is easy to calculate. Some key words: Bayesian image restoration; Convergence; Gibbs sampler; Ising model; Markov chain Monte Carlo.",
+ "neighbors": [
+ 21,
+ 518,
+ 947,
+ 1130
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 518,
+ "label": 3,
+ "text": "Title: Possible biases induced by MCMC convergence diagnostics \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 405 3nov96, revised 3jun97 Abstract We present methods for coupling hidden Markov models (hmms) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. We introduce a deterministic O(T (CN ) 2 ) approximation for maximum a posterior (MAP) state estimation which enables fast classification and parameter estimation via expectation maximization. An \"N-heads\" dynamic programming algorithm samples from the highest probability paths through a compact state trellis, minimizing an upper bound on the cross entropy with the full (combinatoric) dynamic programming problem. The complexity is O(T (CN ) 2 ) for C chains of N states apiece observing T data points, compared with O(T N 2C ) for naive (Cartesian product), exact (state clustering), and stochastic (Monte Carlo) methods applied to the same inference problem. In several experiments examining training time, model likelihoods, classification accuracy, and robustness to initial conditions, coupled hmms compared favorably with conventional hmms and with energy-based approaches to coupled inference chains. We demonstrate and compare these algorithms on synthetic and real data, including interpretation of video.",
+ "neighbors": [
+ 21,
+ 74,
+ 517,
+ 947,
+ 949
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 519,
+ "label": 5,
+ "text": "Title: LEARNING LOGICAL EXCEPTIONS IN CHESS \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 405 3nov96, revised 3jun97 Abstract We present methods for coupling hidden Markov models (hmms) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. We introduce a deterministic O(T (CN ) 2 ) approximation for maximum a posterior (MAP) state estimation which enables fast classification and parameter estimation via expectation maximization. An \"N-heads\" dynamic programming algorithm samples from the highest probability paths through a compact state trellis, minimizing an upper bound on the cross entropy with the full (combinatoric) dynamic programming problem. The complexity is O(T (CN ) 2 ) for C chains of N states apiece observing T data points, compared with O(T N 2C ) for naive (Cartesian product), exact (state clustering), and stochastic (Monte Carlo) methods applied to the same inference problem. In several experiments examining training time, model likelihoods, classification accuracy, and robustness to initial conditions, coupled hmms compared favorably with conventional hmms and with energy-based approaches to coupled inference chains. We demonstrate and compare these algorithms on synthetic and real data, including interpretation of video.",
+ "neighbors": [
+ 661,
+ 708,
+ 724
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 520,
+ "label": 3,
+ "text": "Title: Discretization of continuous Markov chains and MCMC convergence assessment \nAbstract: We show in this paper that continuous state space Markov chains can be rigorously discretized into finite Markov chains. The idea is to subsample the continuous chain at renewal times related to small sets which control the discretization. Once a finite Markov chain is derived from the MCMC output, general convergence properties on finite state spaces can be exploited for convergence assessment in several directions. Our choice is based on a divergence criterion derived from Kemeny and Snell (1960), which is first evaluated on parallel chains with a stopping time, and then implemented, more efficiently, on two parallel chains only, using Birkhoff's pointwise ergodic theorem for stopping rules. The performance of this criterion is illustrated on three standard examples. ",
+ "neighbors": [
+ 266,
+ 772
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 521,
+ "label": 2,
+ "text": "Title: GA-RBF: A Self-Optimising RBF Network \nAbstract: The effects of a neural network's topology on its performance are well known, yet the question of finding optimal configurations automatically remains largely open. This paper proposes a solution to this problem for RBF networks. A self- optimising approach, driven by an evolutionary strategy, is taken. The algorithm uses output information and a computationally efficient approximation of RBF networks to optimise the K-means clustering process by co-evolving the two determinant parameters of the network's layout: the number of centroids and the centroids' positions. Empirical results demonstrate promise. ",
+ "neighbors": [
+ 357,
+ 872,
+ 873,
+ 931
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 522,
+ "label": 1,
+ "text": "Title: Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect Using Learning to Facilitate the\nAbstract: This paper describes a hybrid methodology that integrates genetic algorithms and decision tree learning in order to evolve useful subsets of discriminatory features for recognizing complex visual concepts. A genetic algorithm (GA) is used to search the space of all possible subsets of a large set of candidate discrimination features. Candidate feature subsets are evaluated by using C4.5, a decision-tree learning algorithm, to produce a decision tree based on the given features using a limited amount of training data. The classification performance of the resulting decision tree on unseen testing data is used as the fitness of the underlying feature subset. Experimental results are presented to show how increasing the amount of learning significantly improves feature set evolution for difficult visual recognition problems involving satellite and facial image data. In addition, we also report on the extent to which other more subtle aspects of the Baldwin effect are exhibited by the system. ",
+ "neighbors": [
+ 128,
+ 677,
+ 853,
+ 885
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 523,
+ "label": 1,
+ "text": "Title: Explanations of Empirically Derived Reactive Plans \nAbstract: Given an adequate simulation model of the task environment and payoff function that measures the quality of partially successful plans, competition-based heuristics such as genetic algorithms can develop high performance reactive rules for interesting sequential decision tasks. We have previously described an implemented system, called SAMUEL, for learning reactive plans and have shown that the system can successfully learn rules for a laboratory scale tactical problem. In this paper, we describe a method for deriving explanations to justify the success of such empirically derived rule sets. The method consists of inferring plausible subgoals and then explaining how the reactive rules trigger a sequence of actions (i.e., a stra tegy) to satisfy the subgoals. ",
+ "neighbors": [
+ 529,
+ 553,
+ 554,
+ 555,
+ 661
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 524,
+ "label": 6,
+ "text": "Title: Learning Concepts from Sensor Data of a Mobile Robot \nAbstract: Machine learning can be a most valuable tool for improving the flexibility and efficiency of robot applications. Many approaches to applying machine learning to robotics are known. Some approaches enhance the robot's high-level processing, the planning capabilities. Other approaches enhance the low-level processing, the control of basic actions. In contrast, the approach presented in this paper uses machine learning for enhancing the link between the low-level representations of sensing and action and the high-level representation of planning. The aim is to facilitate the communication between the robot and the human user. A hierarchy of concepts is learned from route records of a mobile robot. Perception and action are combined at every level, i.e., the concepts are perceptually anchored. The relational learning algorithm grdt has been developed which completely searches in a hypothesis space, that is restricted by rule schemata, which the user defines in terms of grammars. ",
+ "neighbors": [
+ 832,
+ 931
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 525,
+ "label": 3,
+ "text": "Title: Compositional Modeling With DPNs \nAbstract: We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are provided for particular classes of problems.",
+ "neighbors": [
+ 321,
+ 560,
+ 722,
+ 783
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 526,
+ "label": 2,
+ "text": "Title: Memory-based Time Series Recognition A New Methodology and Real World Applications \nAbstract: We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are provided for particular classes of problems.",
+ "neighbors": [
+ 40,
+ 582
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 527,
+ "label": 2,
+ "text": "Title: Eclectic Machine Learning \nAbstract: For a target tracking task, the hand-held camera of the anthropomorphic OSCAR-robot manipulator has to track an object which moves arbitrarily on a table. The desired camera-joint mapping is approximated by a feedforward neural network. Through the use of time derivatives of the position of the object and of the manipulator, the controller can inherently predict the next position of the moving target object. In this paper several `anticipative' controllers are described, and successfully applied to track a moving object.",
+ "neighbors": [
+ 470
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 528,
+ "label": 3,
+ "text": "Title: Regression Can Build Predictive Causal Models \nAbstract: Covariance information can help an algorithm search for predictive causal models and estimate the strengths of causal relationships. This information should not be discarded after conditional independence constraints are identified, as is usual in contemporary causal induction algorithms. Our fbd algorithm combines covariance information with an effective heuristic to build predictive causal models. We demonstrate that fbd is accurate and efficient. In one experiment we assess fbd's ability to find the best predictors for variables; in another we compare its performance, using many measures, with Pearl and Verma's ic algorithm. And although fbd is based on multiple linear regression, we cite evidence that it performs well on problems that are very difficult for regression algorithms. ",
+ "neighbors": [
+ 531,
+ 850,
+ 1027
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 529,
+ "label": 1,
+ "text": "Title: Learning Sequential Decision Rules Using Simulation Models and Competition \nAbstract: The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested. ",
+ "neighbors": [
+ 91,
+ 300,
+ 327,
+ 523,
+ 553,
+ 555,
+ 563,
+ 634,
+ 642,
+ 704,
+ 734,
+ 824,
+ 878,
+ 888
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 530,
+ "label": 2,
+ "text": "Title: Statistical Ideas for Selecting Network Architectures \nAbstract: Choosing the architecture of a neural network is one of the most important problems in making neural networks practically useful, but accounts of applications usually sweep these details under the carpet. How many hidden units are needed? Should weight decay be used, and if so how much? What type of output units should be chosen? And so on. We address these issues within the framework of statistical theory for model This paper is principally concerned with architecture selection issues for feed-forward neural networks (also known as multi-layer perceptrons). Many of the same issues arise in selecting radial basis function networks, recurrent networks and more widely. These problems occur in a much wider context within statistics, and applied statisticians have been selecting and combining models for decades. Two recent discussions are [4, 5]. References [3, 20, 21, 22] discuss neural networks from a statistical perspective. choice, which provides a number of workable approximate answers.",
+ "neighbors": [
+ 650,
+ 651,
+ 698
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 531,
+ "label": 3,
+ "text": "Title: A Statistical Semantics for Causation Key words: causality, induction, learning \nAbstract: We propose a model-theoretic definition of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covari-ations following standard norms of inductive reasoning. We also establish a complete characterization of the conditions under which such a distinction is possible. Finally, we provide a proof-theoretical procedure for inductive causation and show that, for a large class of data and structures, effective algorithms exist that uncover the direction of causal influences as defined above.",
+ "neighbors": [
+ 528
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 532,
+ "label": 2,
+ "text": "Title: All-to-all Broadcast on the CNS-1 \nAbstract: This study deals with the all-to-all broadcast on the CNS-1. We determine a lower bound for the run time and present an algorithm meeting this bound. Since this study points out a bottleneck in the network interface, we also analyze the performance of alternative interface designs. Our analyses are based on a run time model of the network. ",
+ "neighbors": [
+ 158
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 533,
+ "label": 3,
+ "text": "Title: Abstract \nAbstract: Automated decision making is often complicated by the complexity of the knowledge involved. Much of this complexity arises from the context-sensitive variations of the underlying phenomena. We propose a framework for representing descriptive, context-sensitive knowledge. Our approach attempts to integrate categorical and uncertain knowledge in a network formalism. This paper outlines the basic representation constructs, examines their expressiveness and efficiency, and discusses the potential applications of the framework.",
+ "neighbors": [
+ 660
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 534,
+ "label": 2,
+ "text": "Title: A comparison of some error estimates for neural network models Summary \nAbstract: We discuss a number of methods for estimating the standard error of predicted values from a multi-layer perceptron. These methods include the delta method based on the Hessian, bootstrap estimators, and the \"sandwich\" estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to the choice of starting weights. ",
+ "neighbors": [
+ 86,
+ 240,
+ 1226,
+ 1227
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 535,
+ "label": 3,
+ "text": "Title: Practical Bayesian Inference Using Mixtures of Mixtures \nAbstract: Discrete mixtures of normal distributions are widely used in modeling amplitude fluctuations of electrical potentials at synapses of human, and other animal nervous systems. The usual framework has independent data values y j arising as y j = j + x n 0 +j where the means j come from some discrete prior G() and the unknown x n 0 +j 's and observed x j ; j = 1; : : : ; n 0 are gaussian noise terms. A practically important development of the associated statistical methods is the issue of non-normality of the noise terms, often the norm rather than the exception in the neurological context. We have recently developed models, based on convolutions of Dirichlet process mixtures, for such problems. Explicitly, we model the noise data values x j as arising from a Dirich-let process mixture of normals, in addition to modeling the location prior G() as a Dirichlet process itself. This induces a Dirichlet mixture of mixtures of normals, whose analysis may be developed using Gibbs sampling techniques. We discuss these models and their analysis, and illustrate in the context of neurological response analysis. ",
+ "neighbors": [
+ 750
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 536,
+ "label": 0,
+ "text": "Title: Dynamic Constraint Satisfaction using Case-Based Reasoning Techniques \nAbstract: The Dynamic Constraint Satisfaction Problem (DCSP) formalism has been gaining attention as a valuable and often necessary extension of the static CSP framework. Dynamic Constraint Satisfaction enables CSP techniques to be applied more extensively, since it can be applied in domains where the set of constraints and variables involved in the problem evolves with time. At the same time, the Case-Based Reasoning (CBR) community has been working on techniques by which to reuse existing solutions when solving new problems. We have observed that dynamic constraint satisfaction matches very closely the case-based reasoning process of case adaptation. These observations emerged from our previous work on combining CBR and CSP to achieve a constraint-based adaptation. This paper summarizes our previous results, describes the similarity of the challenges facing both DCSP and case adaptation, and shows how CSP and CBR can together begin to address these chal lenges.",
+ "neighbors": [
+ 639
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 537,
+ "label": 6,
+ "text": "Title: Quantifying Prior Determination Knowledge using the PAC Learning Model \nAbstract: Prior knowledge, or bias, regarding a concept can speed up the task of learning it. Probably Approximately Correct (PAC) learning is a mathematical model of concept learning that can be used to quantify the speed up due to different forms of bias on learning. Thus far, PAC learning has mostly been used to analyze syntactic bias, such as limiting concepts to conjunctions of boolean prepositions. This paper demonstrates that PAC learning can also be used to analyze semantic bias, such as a domain theory about the concept being learned. The key idea is to view the hypothesis space in PAC learning as that consistent with all prior knowledge, syntactic and semantic. In particular, the paper presents a PAC analysis of determinations, a type of relevance knowledge. The results of the analysis reveal crisp distinctions and relations among different determinations, and illustrate the usefulness of an analysis based on the PAC model. ",
+ "neighbors": [
+ 392,
+ 812,
+ 858
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 538,
+ "label": 3,
+ "text": "Title: In: A Mixture Model System for Medical and Machine Diagnosis \nAbstract: Diagnosis of human disease or machine fault is a missing data problem since many variables are initially unknown. Additional information needs to be obtained. The joint probability distribution of the data can be used to solve this problem. We model this with mixture models whose parameters are estimated by the EM algorithm. This gives the benefit that missing data in the database itself can also be handled correctly. The request for new information to refine the diagnosis is performed using the maximum utility principle. Since the system is based on learning it is domain independent and less labor intensive than expert systems or probabilistic networks. An example using a heart disease database is presented.",
+ "neighbors": [
+ 1253
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 539,
+ "label": 2,
+ "text": "Title: BACKPROPAGATION CAN GIVE RISE TO SPURIOUS LOCAL MINIMA EVEN FOR NETWORKS WITHOUT HIDDEN LAYERS \nAbstract: We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist. ",
+ "neighbors": [
+ 467,
+ 705
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 540,
+ "label": 6,
+ "text": "Title: MAJORITY VOTE CLASSIFIERS: THEORY AND APPLICATIONS \nAbstract: We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist. ",
+ "neighbors": [
+ 39,
+ 601,
+ 826
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 541,
+ "label": 6,
+ "text": "Title: Learning an Optimally Accurate Representational System \nAbstract: The multiple extension problem arises because a default theory can use different subsets of its defaults to propose different, mutually incompatible, answers to some queries. This paper presents an algorithm that uses a set of observations to learn a credulous version of this default theory that is (essentially) \"optimally accurate\". In more detail, we can associate a given default theory with a set of related credulous theories R = fR i g, where each R i uses its own total ordering of the defaults to determine which single answer to return for each query. Our goal is to select the credulous theory that has the highest \"expected accuracy\", where each R i 's expected accuracy is the probability that the answer it produces to a query will correspond correctly to the world. Unfortunately, a theory's expected accuracy depends on the distribution of queries, which is usually not known. Moreover, the task of identifying the optimal R opt 2 R, even given that distribution information, is intractable. This paper presents a method, OptAcc, that sidesteps these problems by using a set of samples to estimate the unknown distribution, and by hill-climbing to a local optimum. In particular, given any parameters *; ffi > 0, OptAcc produces an R oa 2 R whose expected accuracy is, with probability at least 1 ffi, within * of a local optimum. Appeared in ECAI Workshop on Theoretical Foundations of Knowledge Representation and Reasoning, ",
+ "neighbors": [
+ 144,
+ 502
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 542,
+ "label": 1,
+ "text": "Title: Complexity Compression and Evolution \nAbstract: Compression of information is an important concept in the theory of learning. We argue for the hypothesis that there is an inherent compression pressure towards short, elegant and general solutions in a genetic programming system and other variable length evolutionary algorithms. This pressure becomes visible if the size or complexity of solutions are measured without non-effective code segments called introns. The built in parsimony pressure effects complex fitness functions, crossover probability, generality, maximum depth or length of solutions, explicit parsimony, granularity of fitness function, initialization depth or length, and modulariz-ation. Some of these effects are positive and some are negative. In this work we provide a basis for an analysis of these effects and suggestions to overcome the negative implications in order to obtain the balance needed for successful evolution. An empirical investigation that supports our hypothesis is also presented.",
+ "neighbors": [
+ 218,
+ 490,
+ 501,
+ 575,
+ 760,
+ 910
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 543,
+ "label": 1,
+ "text": "Title: Genetic Programming of Minimal Neural Nets Using Occam's Razor \nAbstract: A genetic programming method is investigated for optimizing both the architecture and the connection weights of multilayer feedforward neural networks. The genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators. The weights are trained by a next-ascent hillclimb-ing search. A new fitness function is proposed that quantifies the principle of Occam's razor. It makes an optimal trade-off between the error fitting ability and the parsimony of the network. We discuss the results for two problems of differing complexity and study the convergence and scaling properties of the algorithm.",
+ "neighbors": [
+ 323,
+ 357,
+ 640,
+ 1151
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 544,
+ "label": 2,
+ "text": "Title: A Simple Neural Network Models Categorical Perception of Facial Expressions \nAbstract: The performance of a neural network that categorizes facial expressions is compared with human subjects over a set of experiments using interpolated imagery. The experiments for both the human subjects and neural networks make use of interpolations of facial expressions from the Pictures of Facial Affect Database [Ekman and Friesen, 1976]. The only difference in materials between those used in the human subjects experiments [Young et al., 1997] and our materials are the manner in which the interpolated images are constructed - image-quality morphs versus pixel averages. Nevertheless, the neural network accurately captures the categorical nature of the human responses, showing sharp transitions in labeling of images along the interpolated sequence. Crucially for a demonstration of categorical perception [Harnad, 1987], the model shows the highest discrimination between transition images at the crossover point. The model also captures the shape of the reaction time curves of the human subjects along the sequences. Finally, the network matches human subjects' judgements of which expressions are being mixed in the images. The main failing of the model is that there are intrusions of neutral responses in some transitions, which are not seen in the human subjects. We attribute this difference to the difference between the pixel average stimuli and the image quality morph stimuli. These results show that a simple neural network classifier, with no access to the biological constraints that are presumably imposed on the human emotion processor, and whose only access to the surrounding culture is the category labels placed by American subjects on the facial expressions, can nevertheless simulate fairly well the human responses to emotional expressions. ",
+ "neighbors": [
+ 699
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 545,
+ "label": 1,
+ "text": "Title: Crossover or Mutation? \nAbstract: Genetic algorithms rely on two genetic operators crossover and mutation. Although there exists a large body of conventional wisdom concerning the roles of crossover and mutation, these roles have not been captured in a theoretical fashion. For example, it has never been theoretically shown that mutation is in some sense \"less powerful\" than crossover or vice versa. This paper provides some answers to these questions by theoretically demonstrating that there are some important characteristics of each operator that are not captured by the other.",
+ "neighbors": [
+ 420,
+ 462,
+ 579,
+ 816
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 546,
+ "label": 3,
+ "text": "Title: Structured Representation of Complex Stochastic Systems \nAbstract: This paper considers the problem of representing complex systems that evolve stochastically over time. Dynamic Bayesian networks provide a compact representation for stochastic processes. Unfortunately, they are often unwieldy since they cannot explicitly model the complex organizational structure of many real life systems: the fact that processes are typically composed of several interacting subprocesses, each of which can, in turn, be further decomposed. We propose a hierarchically structured representation language which extends both dynamic Bayesian networks and the object-oriented Bayesian network framework of [9], and show that our language allows us to describe such systems in a natural and modular way. Our language supports a natural representation for certain system characteristics that are hard to capture using more traditional frameworks. For example, it allows us to represent systems where some processes evolve at a different rate than others, or systems where the processes interact only intermittently. We provide a simple inference mechanism for our representation via translation to Bayesian networks, and suggest ways in which the inference algorithm can exploit the additional structure encoded in our representation. ",
+ "neighbors": [
+ 34,
+ 190,
+ 458,
+ 722,
+ 791
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 547,
+ "label": 2,
+ "text": "Title: Constructive Learning of Recurrent Neural Networks: Limitations of Recurrent Casade Correlation and a Simple Solution \nAbstract: It is often difficult to predict the optimal neural network size for a particular application. Constructive or destructive methods that add or subtract neurons, layers, connections, etc. might offer a solution to this problem. We prove that one method, Recurrent Cascade Correlation, due to its topology, has fundamental limitations in representation and thus in its learning capabilities. It cannot represent with monotone (i.e. sigmoid) and hard-threshold activation functions certain finite state automata. We give a \"preliminary\" approach on how to get around these limitations by devising a simple constructive training method that adds neurons during training while still preserving the powerful fully-recurrent structure. We illustrate this approach by simulations which learn many examples of regular grammars that the ",
+ "neighbors": [
+ 240
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 548,
+ "label": 0,
+ "text": "Title: An Optimal Weighting Criterion of Case Indexing for Both Numeric and Symbolic Attributes \nAbstract: Indexing of cases is an important topic for Memory-Based Reasoning(MBR). One key problem is how to assign weights to attributes of cases. Although several weighting methods have been proposed, some methods cannot handle numeric attributes directly, so it is necessary to discretize numeric values by classification. Furthermore, existing methods have no theoretical background, so little can be said about optimality. We propose a new weighting method based on a statistical technique called Quantification Method II. It can handle both numeric and symbolic attributes in the same framework. Generated attribute weights are optimal in the sense that they maximize the ratio of variance between classes to variance of all cases. Experiments on several benchmark tests show that in many cases, our method obtains higher accuracies than some other weighting methods. The results also indicate that it can distinguish relevant attributes from irrelevant ones, and can tolerate noisy data. ",
+ "neighbors": [
+ 743
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 549,
+ "label": 2,
+ "text": "Title: An Optimal Weighting Criterion of Case Indexing for Both Numeric and Symbolic Attributes \nAbstract: A General Result on the Stabilization of Linear Systems Using Bounded Controls 1 ABSTRACT We present two constructions of controllers that globally stabilize linear systems subject to control saturation. We allow essentially arbitrary saturation functions. The only conditions imposed on the system are the obvious necessary ones, namely that no eigenvalues of the uncontrolled system have positive real part and that the standard stabilizability rank condition hold. One of the constructions is in terms of a \"neural-network type\" one-hidden layer architecture, while the other one is in terms of cascades of linear maps and saturations. ",
+ "neighbors": [
+ 719,
+ 803
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 550,
+ "label": 0,
+ "text": "Title: Using Case-Based Reasoning to Acquire User Scheduling Preferences that Change over Time \nAbstract: Production/Manufacturing scheduling typically involves the acquisition of user optimization preferences. The ill-structuredness of both the problem space and the desired objectives make practical scheduling problems difficult to formalize and costly to solve, especially when problem configurations and user optimization preferences change over time. This paper advocates an incremental revision framework for improving schedule quality and incorporating user dynamically changing preferences through Case-Based Reasoning. Our implemented system, called CABINS, records situation-dependent tradeoffs and consequences that result from schedule revision to guide schedule improvement. The preliminary experimental results show that CABINS is able to effectively capture both user static and dynamic preferences which are not known to the system and only exist implicitly in a extensional manner in the case base. ",
+ "neighbors": [
+ 866,
+ 1315
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 551,
+ "label": 3,
+ "text": "Title: Some Varieties of Qualitative Probability \nAbstract: ",
+ "neighbors": [
+ 606
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 552,
+ "label": 1,
+ "text": "Title: Modeling Distributed Search via Social Insects \nAbstract: Complex group behavior arises in social insects colonies as the integration of the actions of simple and redundant individual insects [Adler and Gordon, 1992, Oster and Wilson, 1978]. Furthermore, the colony can act as an information center to expedite foraging [Brown, 1989]. We apply these lessons from natural systems to model collective action and memory in a computational agent society. Collective action can expedite search in combinatorial optimization problems [Dorigo et al., 1996]. Collective memory can improve learning in multi-agent systems [Garland and Alterman, 1996]. Our collective adaptation integrates the simplicity of collective action with the pattern detection of collective memory to significantly improve both the gathering and processing of knowledge. As a test of the role of the society as an information center, we examine the ability of the society to distribute task allocation without any omnipotent centralized control. ",
+ "neighbors": [
+ 568,
+ 664,
+ 692,
+ 1311
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 553,
+ "label": 1,
+ "text": "Title: Simulation-Assisted Learning by Competition: Effects of Noise Differences Between Training Model and Target Environment \nAbstract: The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical plans from a simple flight simulator where a plane must avoid a missile. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested. Specifically, either the model or the target environment may contain noise. These experiments examine the effect of learning tactical plans without noise and then testing the plans in a noisy environment, and the effect of learning plans in a noisy simulator and then testing the plans in a noise-free environment. Empirical results show that, while best result are obtained when the training model closely matches the target environment, using a training environment that is more noisy than the target environment is better than using using a training environment that has less noise than the target environment. ",
+ "neighbors": [
+ 523,
+ 529,
+ 554,
+ 555,
+ 824
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 554,
+ "label": 1,
+ "text": "Title: Improving Tactical Plans with Genetic Algorithms \nAbstract: ",
+ "neighbors": [
+ 98,
+ 523,
+ 553,
+ 555,
+ 603,
+ 704,
+ 824
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 555,
+ "label": 1,
+ "text": "Title: Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation \nAbstract: Navigation through obstacles such as mine fields is an important capability for autonomous underwater vehicles. One way to produce robust behavior is to perform projective planning. However, real-time performance is a critical requirement in navigation. What is needed for a truly autonomous vehicle are robust reactive rules that perform well in a wide variety of situations, and that also achieve real-time performance. In this work, SAMUEL, a learning system based on genetic algorithms, is used to learn high-performance reactive strategies for navigation and collision avoidance. ",
+ "neighbors": [
+ 91,
+ 523,
+ 529,
+ 553,
+ 554,
+ 642,
+ 824
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 556,
+ "label": 6,
+ "text": "Title: Rigorous Learning Curve Bounds from Statistical Mechanics \nAbstract: In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory. ",
+ "neighbors": [
+ 29,
+ 30,
+ 62,
+ 178,
+ 188,
+ 492,
+ 493,
+ 592,
+ 785,
+ 870
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 557,
+ "label": 3,
+ "text": "Title: Decision-Theoretic Foundations for Causal Reasoning \nAbstract: We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.",
+ "neighbors": [
+ 742,
+ 850,
+ 895,
+ 1139
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 558,
+ "label": 3,
+ "text": "Title: Interval Censored Survival Data: A Review of Recent Progress \nAbstract: We review estimation in interval censoring models, including nonparametric estimation of a distribution function and estimation of regression models. In the non-parametric setting, we describe computational procedures and asymptotic properties of the nonparametric maximum likelihood estimators. In the regression setting, we focus on the proportional hazards, the proportional odds and the accelerated failure time semiparametric regression models. Particular emphasis is given to calculation of the Fisher information for the regression parameters. We also discuss computation of the regression parameter estimators via profile likelihood or maximization of the semi-parametric likelihood, distributional results for the maximum likelihood estimators, and estimation of (asymptotic) variances. Some further problems and open questions are also reviewed. ",
+ "neighbors": [
+ 465,
+ 567
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 559,
+ "label": 2,
+ "text": "Title: State Reconstruction for Determining Predictability in Driven Nonlinear Acoustical Systems \nAbstract: Genetic programming is distinguished from other evolutionary algorithms in that it uses tree representations of variable size instead of linear strings of fixed length. The flexible representation scheme is very important because it allows the underlying structure of the data to be discovered automatically. One primary difficulty, however, is that the solutions may grow too big without any improvement of their generalization ability. In this paper we investigate the fundamental relationship between the performance and complexity of the evolved structures. The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis. We consider genetic programming as a statistical inference problem and apply the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms. An adaptive learning method is then presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy. The effectiveness of this approach is empirically shown on the induction of sigma-pi neural networks for solving a real-world medical diagnosis problem as well as benchmark tasks. ",
+ "neighbors": [
+ 40,
+ 42,
+ 355,
+ 388,
+ 613
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 560,
+ "label": 3,
+ "text": "Title: Space-efficient inference in dynamic probabilistic networks \nAbstract: Dynamic probabilistic networks (DPNs) are a useful tool for modeling complex stochastic processes. The simplest inference task in DPNs is monitoring | that is, computing a posterior distribution for the state variables at each time step given all observations up to that time. Recursive, constant-space algorithms are well-known for monitoring in DPNs and other models. This paper is concerned with hindsight | that is, computing a posterior distribution given both past and future observations. Hindsight is an essential subtask of learning DPN models from data. Existing algorithms for hindsight in DPNs use O(SN ) space and time, where N is the total length of the observation sequence and S is the state space size for each time step. They are therefore impractical for hindsight in complex models with long observation sequences. This paper presents an O(S log N ) space, O(SN log N ) time hindsight algorithm. We demonstrates the effectiveness of the algorithm in two real-world DPN learning problems. We also discuss the possibility of an O(S)-space, O(SN )-time algorithm. ",
+ "neighbors": [
+ 525,
+ 711,
+ 722,
+ 783
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 561,
+ "label": 2,
+ "text": "Title: FLAT MINIMA Neural Computation 9(1):1-42 (1997) \nAbstract: We present a new algorithm for finding low complexity neural networks with high generalization capability. The algorithm searches for a \"flat\" minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to \"simple\" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a \"good\" weight prior instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms (1) conventional backprop, (2) weight decay, (3) \"optimal brain surgeon\" / \"optimal brain damage\". We also provide pseudo code of the algorithm (omitted from the NC-version). ",
+ "neighbors": [
+ 38,
+ 86,
+ 444,
+ 997
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 562,
+ "label": 2,
+ "text": "Title: Some Topics in Neural Networks and Control \nAbstract: We present a new algorithm for finding low complexity neural networks with high generalization capability. The algorithm searches for a \"flat\" minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to \"simple\" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a \"good\" weight prior instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms (1) conventional backprop, (2) weight decay, (3) \"optimal brain surgeon\" / \"optimal brain damage\". We also provide pseudo code of the algorithm (omitted from the NC-version). ",
+ "neighbors": [
+ 830
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 563,
+ "label": 1,
+ "text": "Title: Evolutionary Neural Networks for Value Ordering in Constraint Satisfaction Problems \nAbstract: Technical Report AI94-218 May 1994 Abstract A new method for developing good value-ordering strategies in constraint satisfaction search is presented. Using an evolutionary technique called SANE, in which individual neurons evolve to cooperate and form a neural network, problem-specific knowledge can be discovered that results in better value-ordering decisions than those based on problem-general heuristics. A neural network was evolved in a chronological backtrack search to decide the ordering of cars in a resource-limited assembly line. The network required 1/30 of the backtracks of random ordering and 1/3 of the backtracks of the maximization of future options heuristic. The SANE approach should extend well to other domains where heuristic information is either difficult to discover or problem-specific. ",
+ "neighbors": [
+ 91,
+ 140,
+ 529
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 564,
+ "label": 0,
+ "text": "Title: Refining Conversational Case Libraries \nAbstract: Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than eliminate the knowledge engineering bottleneck, they refocus it on case engineering, the task of carefully authoring cases according to library design guidelines to ensure good performance. Designing complex libraries according to these guidelines is difficult; software is needed to assist users with case authoring. We describe an approach for revising case libraries according to design guidelines, its implementation in Clire, and empirical results showing that, under some conditions, this approach can improve conversational CBR performance.",
+ "neighbors": [
+ 148,
+ 516,
+ 571,
+ 730,
+ 911,
+ 959
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 565,
+ "label": 2,
+ "text": "Title: Shattering all sets of k points in \"general position\" requires (k 1)=2 parameters \nAbstract: For classes of concepts defined by certain classes of analytic functions depending on n parameters, there are nonempty open sets of samples of length 2n + 2 which cannot be shattered. A slighly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed.",
+ "neighbors": [
+ 31,
+ 467,
+ 973
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 566,
+ "label": 0,
+ "text": "Title: Systematic Evaluation of Design Decisions in CBR Systems \nAbstract: Two important goals in the evaluation of an AI theory or model are to assess the merit of the design decisions in the performance of an implemented computer system and to analyze the impact in the performance when the system faces problem domains with different characteristics. This is particularly difficult in case-based reasoning systems because such systems are typically very complex, as are the tasks and domains in which they operate. We present a methodology for the evaluation of case-based reasoning systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. This methodology enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave in response to changing domain and problem characteristics. A case study of a mul-tistrategy case-based and reinforcement learning system which performs autonomous robotic navigation is presented as an example. ",
+ "neighbors": [
+ 185,
+ 500,
+ 617,
+ 770
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 567,
+ "label": 3,
+ "text": "Title: Efficient Estimation for the Cox Model with Interval Censoring \nAbstract: The maximum likelihood estimator (MLE) for the proportional hazards model with current status data is studied. It is shown that the MLE for the regression parameter is asymptotically normal with p n-convergence rate and achieves the information bound, even though the MLE for the baseline cumulative hazard function only converges at n 1=3 rate. Estimation of the asymptotic variance matrix for the MLE of the regression parameter is also considered. To prove our main results, we also establish a general theorem showing that the MLE of the finite dimensional parameter in a class of semiparametric models is asymptotically efficient even though the MLE of the infinite dimensional parameter converges at a rate slower than The results are illustrated by applying them to a data set from a tumoriginicity study. 1. Introduction In many survival analysis problems, we are interested in the p",
+ "neighbors": [
+ 558
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 568,
+ "label": 1,
+ "text": "Title: Evolving a Team \nAbstract: PO Box 600 Wellington New Zealand Tel: +64 4 471 5328 Fax: +64 4 495 5232 Internet: Tech.Reports@comp.vuw.ac.nz Technical Report CS-TR-92/4 October 1992 Abstract People often give advice by telling stories. Stories both recommend a course of action and exemplify general conditions in which that recommendation is appropriate. A computational model of advice taking using stories must address two related problems: determining the story's recommendations and appropriateness conditions, and showing that these obtain in the new situation. In this paper, we present an efficient solution to the second problem based on caching the results of the first. Our proposal has been implemented in brainstormer, a planner that takes abstract advice. ",
+ "neighbors": [
+ 234,
+ 552,
+ 664,
+ 691,
+ 692,
+ 693,
+ 1060
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 569,
+ "label": 3,
+ "text": "Title: Reparameterisation Issues in Mixture Modelling and their bearing on MCMC algorithms \nAbstract: There is increasing need for efficient estimation of mixture distributions, especially following the explosion in the use of these as modelling tools in many applied fields. We propose in this paper a Bayesian noninformative approach for the estimation of normal mixtures which relies on a reparameterisation of the secondary components of the mixture in terms of divergence from the main component. As well as providing an intuitively appealing representation at the modelling stage, this reparameterisation has important bearing on both the prior distribution and the performance of MCMC algorithms. We compare two possible reparameterisations extending Mengersen and Robert (1996) and show that the reparameterisation which does not link the secondary components together is associated with poor convergence properties of MCMC algorithms. ",
+ "neighbors": [
+ 90,
+ 578
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 570,
+ "label": 3,
+ "text": "Title: Accounting for Model Uncertainty in Survival Analysis Improves Predictive Performance \nAbstract: Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful.",
+ "neighbors": [
+ 47,
+ 199,
+ 698
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 571,
+ "label": 0,
+ "text": "Title: A Model-Based Approach for Supporting Dialogue Inferencing in a Conversational Case-Based Reasoner \nAbstract: Conversational case-based reasoning (CCBR) is a form of interactive case-based reasoning where users input a partial problem description (in text). The CCBR system responds with a ranked solution display, which lists the solutions of stored cases whose problem descriptions best match the user's, and a ranked question display, which lists the unanswered questions in these cases. Users interact with these displays, either refining their problem description by answering selected questions, or selecting a solution to apply. CCBR systems should support dialogue inferencing; they should infer answers to questions that are implied by the problem description. Otherwise, questions will be listed that the user believes they have already answered. The standard approach to dialogue inferencing allows case library designers to insert rules that define implications between the problem description and unanswered questions. However, this approach imposes substantial knowledge engineering requirements. We introduce an alternative approach whereby an intelligent assistant guides the designer in defining a model of their case library, from which implication rules are derived. We detail this approach, its benefits, and explain how it can be supported through an integration with Parka-DB, a fast relational database system. We will evaluate our approach in the context of our CCBR system, named NaCoDAE. This paper appeared at the 1998 AAAI Spring Symposium on Multimodal Reasoning, and is NCARAI TR AIC-97-023. We introduce an integrated reasoning approach in which a model-based reasoning component performs an important inferencing role in a conversational case-based reasoning (CCBR) system named NaCoDAE (Breslow & Aha, 1997) (Figure 1). CCBR is a form of case-based reasoning where users enter text queries describing a problem and the system assists in eliciting refinements of it (Aha & Breslow, 1997). Cases have three components: ",
+ "neighbors": [
+ 564,
+ 959
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 572,
+ "label": 6,
+ "text": "Title: Learning Conjunctions of Horn Clauses \nAbstract: ",
+ "neighbors": [
+ 392,
+ 461,
+ 754,
+ 1082
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 573,
+ "label": 5,
+ "text": "Title: Applications of a logical discovery engine \nAbstract: The clausal discovery engine claudien is presented. claudien discovers regularities in data and is a representative of the inductive logic programming paradigm. As such, it represents data and regularities by means of first order clausal theories. Because the search space of clausal theories is larger than that of attribute value representation, claudien also accepts as input a declarative specification of the language bias, which determines the set of syntactically well-formed regularities. Whereas other papers on claudien focuss on the semantics or logical problem specification of claudien, on the discovery algorithm, or the PAC-learning aspects, this paper wants to illustrate the power of the resulting technique. In order to achieve this aim, we show how claudien can be used to learn 1) integrity constraints in databases, 2) functional dependencies and determinations, 3) properties of sequences, 4) mixed quantitative and qualitative laws, 5) reverse engineering, and 6) classification rules. ",
+ "neighbors": [
+ 198,
+ 486,
+ 1037,
+ 1247
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 574,
+ "label": 5,
+ "text": "Title: Induction of decision trees using RELIEFF \nAbstract: In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms to detect significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show strong relation between RELIEF's estimates and impurity functions, that are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems. Results show the advantage of the presented approach to inductive learning and open a wide rang of possibilities for using RELIEFF. ",
+ "neighbors": [
+ 577,
+ 828,
+ 875
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 575,
+ "label": 1,
+ "text": "Title: Induction of decision trees using RELIEFF \nAbstract: An investigation into the dynamics of Genetic Programming applied to chaotic time series prediction is reported. An interesting characteristic of adaptive search techniques is their ability to perform well in many problem domains while failing in others. Because of Genetic Programming's flexible tree structure, any particular problem can be represented in myriad forms. These representations have variegated effects on search performance. Therefore, an aspect of fundamental engineering significance is to find a representation which, when acted upon by Genetic Programming operators, optimizes search performance. We discover, in the case of chaotic time series prediction, that the representation commonly used in this domain does not yield optimal solutions. Instead, we find that the population converges onto one \"accurately replicating\" tree before other trees can be explored. To correct for this premature convergence we make a simple modification to the crossover operator. In this paper we review previous work with GP time series prediction, pointing out an anomalous result related to overlearning, and report the improvement effected by our modified crossover operator. ",
+ "neighbors": [
+ 542,
+ 613,
+ 1145
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 576,
+ "label": 5,
+ "text": "Title: Linear Space Induction in First Order Logic with RELIEFF \nAbstract: Current ILP algorithms typically use variants and extensions of the greedy search. This prevents them to detect significant relationships between the training objects. Instead of myopic impurity functions, we propose the use of the heuristic based on RELIEF for guidance of ILP algorithms. At each step, in our ILP-R system, this heuristic is used to determine a beam of candidate literals. The beam is then used in an exhaustive search for a potentially good conjunction of literals. From the efficiency point of view we introduce interesting declarative bias which enables us to keep the growth of the training set, when introducing new variables, within linear bounds (linear with respect to the clause length). This bias prohibits cross-referencing of variables in variable dependency tree. The resulting system has been tested on various artificial problems. The advantages and deficiencies of our approach are discussed. ",
+ "neighbors": [
+ 509,
+ 577,
+ 604,
+ 665,
+ 875,
+ 882,
+ 921
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 577,
+ "label": 5,
+ "text": "Title: Discretization of continuous attributes using ReliefF \nAbstract: Many existing learning algorithms expect the attributes to be discrete. Discretization of continuous attributes might be difficult task even for domain experts. We have tried the non-myopic heuristic measure ReliefF for discretization and compared it with well known dissimilarity measure and discretizations by experts. An extensive testing with several learning algorithms on six real world databases has shown that none of the discretizations has clear advantage over the others. ",
+ "neighbors": [
+ 574,
+ 576,
+ 875
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 578,
+ "label": 3,
+ "text": "Title: Bayesian curve fitting using multivariate normal mixtures \nAbstract: Problems of regression smoothing and curve fitting are addressed via predictive inference in a flexible class of mixture models. Multi-dimensional density estimation using Dirichlet mixture models provides the theoretical basis for semi-parametric regression methods in which fitted regression functions may be deduced as means of conditional predictive distributions. These Bayesian regression functions have features similar to generalised kernel regression estimates, but the formal analysis addresses problems of multivariate smoothing parameter estimation and the assessment of uncertainties about regression functions naturally. Computations are based on multi-dimensional versions of existing Markov chain simulation analysis of univariate Dirichlet mixture models. ",
+ "neighbors": [
+ 495,
+ 569,
+ 750
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 579,
+ "label": 1,
+ "text": "Title: An Analysis of the Interacting Roles of Population Size and Crossover in Genetic Algorithms \nAbstract: In this paper we present some theoretical and empirical results on the interacting roles of population size and crossover in genetic algorithms. We summarize recent theoretical results on the disruptive effect of two forms of multi-point crossover: n-point crossover and uniform crossover. We then show empirically that disruption analysis alone is not sufficient for selecting appropriate forms of crossover. However, by taking into account the interacting effects of population size and crossover, a general picture begins to emerge. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested. ",
+ "neighbors": [
+ 419,
+ 420,
+ 499,
+ 545,
+ 611,
+ 629,
+ 676,
+ 732,
+ 816
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 580,
+ "label": 2,
+ "text": "Title: Using Neural Networks for Descriptive Statistical Analysis of Educational Data \nAbstract: In this paper we discuss the methodological issues of using a class of neural networks called Mixture Density Networks (MDN) for discriminant analysis. MDN models have the advantage of having a rigorous probabilistic interpretation, and they have proven to be a viable alternative as a classification procedure in discrete domains. We will address both the classification and interpretive aspects of discriminant analysis, and compare the approach to the traditional method of linear discrimin- ants as implemented in standard statistical packages. We show that the MDN approach adopted performs well in both aspects. Many of the observations made are not restricted to the particular case at hand, and are applicable to most applications of discriminant analysis in educational research. fl URL: http://www.cs.Helsinki.FI/research/cosco/ ",
+ "neighbors": [
+ 40,
+ 86,
+ 879
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 581,
+ "label": 1,
+ "text": "Title: Simulated Annealing for Hard Satisfiability Problems \nAbstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Finally, an improvement to the basic SASAT algorithm is examined, based on a random walk suggested by Selman et al. (1993). ",
+ "neighbors": [
+ 643,
+ 645
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 582,
+ "label": 6,
+ "text": "Title: Bibliography \"SMART: Support Management Automated Reasoning Technology for COMPAQ Customer Service,\" \"Instance-Based Learning Algorithms,\" Machine\nAbstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Finally, an improvement to the basic SASAT algorithm is examined, based on a random walk suggested by Selman et al. (1993). ",
+ "neighbors": [
+ 496,
+ 526,
+ 724
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 583,
+ "label": 3,
+ "text": "Title: Error-Based and Entropy-Based Discretization of Continuous Features \nAbstract: We present a comparison of error-based and entropy-based methods for discretization of continuous features. Our study includes both an extensive empirical comparison as well as an analysis of scenarios where error minimization may be an inappropriate discretization criterion. We present a discretization method based on the C4.5 decision tree algorithm and compare it to an existing entropy-based discretization algorithm, which employs the Minimum Description Length Principle, and a recently proposed error-based technique. We evaluate these discretization methods with respect to C4.5 and Naive-Bayesian classifiers on datasets from the UCI repository and analyze the computational complexity of each method. Our results indicate that the entropy-based MDL heuristic outperforms error minimization on average. We then analyze the shortcomings of error-based approaches in comparison to entropy-based methods. ",
+ "neighbors": [
+ 242,
+ 743,
+ 744,
+ 749,
+ 1302
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 584,
+ "label": 2,
+ "text": "Title: Lemma 2.3 The system is reachable and observable and realizes the same input/output behavior as\nAbstract: Here we show a similar construction for multiple-output systems, with some modifications. Let = (A; B; C) s be a discrete-time sign-linear system with state space IR n and p outputs. Perform a change of ; where A 1 (n 1 fi n 1 ) is invertible and A 2 (n 2 fi n 2 ) is nilpotent. If (A; B) is a reachable pair and (A; C) is an observable pair, then is minimal in the sense that any other sign-linear system with the same input/output behavior has dimension at least n. But, if n 1 < n, then det A = 0 and is not observable and hence not canonical. Let us find another system ~ (necessarily not sign-linear) which has the same input/output behavior as , but is canonical. Let i be the relative degree of the ith row of the Markov sequence A, and = minf i : i = 1; : : : ; pg. Let the initial state be x. There is a difference between the case when the smallest relative degree is greater or equal to n 2 and the case when < n 2 . Roughly speaking, when n 2 the outputs of the sign-linear system give us information about sign (Cx), sign (CAx), : : : , sign (CA 1 x), which are the first outputs of the sys tem. After that, we can use the inputs and outputs to learn only about x 1 (the first n 1 components of x). When < n 2 , we may be able to use some controls to learn more about x 2 (the last n 2 components of x) before time n 2 when the nilpotency of A 2 has finally Lemma 2.4 Two states x and z are indistinguishable for if and only if (x) = (z). Proof. In the case n 2 , we have only the equations x 1 = z 1 and the equality of the 's. The first ` output terms for are exactly the terms of . So these equalities are satisfied if and only if the first ` output terms coincide for x and z, for any input. Equality of everything but the first n 1 components is equivalent to the first n 2 output terms coinciding for x and z, since the jth row of the qth output, for initial state x, for example, is either sign (c j A q x) if j > q, or sign (c j A q x + + A j j u q j +1 + ) if j q in which case we may use the control u q j +1 to identify c j A q x (using Remark 3.3 in [1]). ",
+ "neighbors": [
+ 815
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 585,
+ "label": 6,
+ "text": "Title: On learning hierarchical classifications \nAbstract: Many significant real-world classification tasks involve a large number of categories which are arranged in a hierarchical structure; for example, classifying documents into subject categories under the library of congress scheme, or classifying world-wide-web documents into topic hierarchies. We investigate the potential benefits of using a given hierarchy over base classes to learn accurate multi-category classifiers for these domains. First, we consider the possibility of exploiting a class hierarchy as prior knowledge that can help one learn a more accurate classifier. We explore the benefits of learning category-discriminants in a hard top-down fashion and compare this to a soft approach which shares training data among sibling categories. In doing so, we verify that hierarchies have the potential to improve prediction accuracy. But we argue that the reasons for this can be subtle. Sometimes, the improvement is only because using a hierarchy happens to constrain the expressiveness of a hypothesis class in an appropriate manner. However, various controlled experiments show that in other cases the performance advantage associated with using a hierarchy really does seem to be due to the prior knowledge it encodes. ",
+ "neighbors": [
+ 40,
+ 601,
+ 747,
+ 1208
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 586,
+ "label": 6,
+ "text": "Title: Machine Learning 27(1):51-68, 1997. Predicting nearly as well as the best pruning of a decision tree \nAbstract: Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up \"over-fitting\" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be \"much worse\" (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s [4] results on predicting using \"expert advice\" (where we view each pruning as an \"expert\") to obtain an algorithm that has provably low prediction loss, but that is com-putationally infeasible. Next, we generalize and apply a method developed by Buntine [3], [2] and Willems, Shtarkov and Tjalkens [20], [21] to derive a very efficient implementation of this procedure. ",
+ "neighbors": [
+ 255,
+ 330,
+ 508,
+ 724,
+ 804,
+ 946
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 587,
+ "label": 6,
+ "text": "Title: Pessimistic decision tree pruning based on tree size \nAbstract: In this work we develop a new criteria to perform pessimistic decision tree pruning. Our method is theoretically sound and is based on theoretical concepts such as uniform convergence and the Vapnik-Chervonenkis dimension. We show that our criteria is very well motivated, from the theory side, and performs very well in practice. The accuracy of the new criteria is comparable to that of the current method used in C4.5.",
+ "neighbors": [
+ 29,
+ 188,
+ 217,
+ 374,
+ 748
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 588,
+ "label": 2,
+ "text": "Title: NETWORKS, FUNCTION DETERMINES FORM \nAbstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function ; if the two nets have equal behaviors as \"black boxes\" then necessarily they must have the same number of neurons and |except at most for sign reversals at each node| the same weights. ",
+ "neighbors": [
+ 116,
+ 596,
+ 798,
+ 901
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 589,
+ "label": 3,
+ "text": "Title: Subregion-Adaptive Integration of Functions Having a Dominant Peak \nAbstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function ; if the two nets have equal behaviors as \"black boxes\" then necessarily they must have the same number of neurons and |except at most for sign reversals at each node| the same weights. ",
+ "neighbors": [
+ 1257
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 590,
+ "label": 1,
+ "text": "Title: Solving Combinatorial Problems Using Evolutionary Algorithms \nAbstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function ; if the two nets have equal behaviors as \"black boxes\" then necessarily they must have the same number of neurons and |except at most for sign reversals at each node| the same weights. ",
+ "neighbors": [
+ 91,
+ 643,
+ 984
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 591,
+ "label": 2,
+ "text": "Title: Protein Secondary Structure Modelling with Probabilistic Networks (Extended Abstract) \nAbstract: In this paper we study the performance of probabilistic networks in the context of protein sequence analysis in molecular biology. Specifically, we report the results of our initial experiments applying this framework to the problem of protein secondary structure prediction. One of the main advantages of the probabilistic approach we describe here is our ability to perform detailed experiments where we can experiment with different models. We can easily perform local substitutions (mutations) and measure (probabilistically) their effect on the global structure. Window-based methods do not support such experimentation as readily. Our method is efficient both during training and during prediction, which is important in order to be able to perform many experiments with different networks. We believe that probabilistic methods are comparable to other methods in prediction quality. In addition, the predictions generated by our methods have precise quantitative semantics which is not shared by other classification methods. Specifically, all the causal and statistical independence assumptions are made explicit in our networks thereby allowing biologists to study and experiment with different causal models in a convenient manner. ",
+ "neighbors": [
+ 150,
+ 743
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 592,
+ "label": 6,
+ "text": "Title: Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation \nAbstract: In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity-check refers to the fact that although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for rather limited cases in the prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability, and apply it to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. We also provide lower bounds demonstrating the necessity of error stability for proving bounds on the error of the leave-one-out estimate, and the fact that for training error minimization algorithms, in the worst case such bounds must still depend on the Vapnik-Chervonenkis dimension of the hypothesis class. ",
+ "neighbors": [
+ 346,
+ 492,
+ 493,
+ 556,
+ 747
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 593,
+ "label": 0,
+ "text": "Title: Observation and Generalisation in a Simulated Robot World \nAbstract: This paper describes a program which observes the behaviour of actors in a simulated world and uses these observations as guides to conducting experiments. An experiment is a sequence of actions carried out by an actor in order to support or weaken the case for a generalisation of a concept. A generalisation is attempted when the program observes a state of the world which is similar to a some previous state. A partial matching algorithm is used to find substitutions which enable the two states to be unified. The generalisation of the two states is their unifier. ",
+ "neighbors": [
+ 661
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 594,
+ "label": 1,
+ "text": "Title: Adaptive Behavior in Competing Co-Evolving Species \nAbstract: Co-evolution of competitive species provides an interesting testbed to study the role of adaptive behavior because it provides unpredictable and dynamic environments. In this paper we experimentally investigate some arguments for the co-evolution of different adaptive protean behaviors in competing species of predators and preys. Both species are implemented as simulated mobile robots (Kheperas) with infrared proximity sensors, but the predator has an additional vision module whereas the prey has a maximum speed set to twice that of the predator. Different types of variability during life for neurocontrollers with the same architecture and genetic length are compared. It is shown that simple forms of pro-teanism affect co-evolutionary dynamics and that preys rather exploit noisy controllers to generate random trajectories, whereas predators benefit from directional-change controllers to improve pursuit behavior.",
+ "neighbors": [
+ 308,
+ 413
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 595,
+ "label": 2,
+ "text": "Title: The Potential of Prototype Styles of Generalization \nAbstract: There are many ways for a learning system to generalize from training set data. This paper presents several generalization styles using prototypes in an attempt to provide accurate generalization on training set data for a wide variety of applications. These generalization styles are efficient in terms of time and space, and lend themselves well to massively parallel architectures. Empirical results of generalizing on several real-world applications are given, and these results indicate that the prototype styles of generalization presented have potential to provide accurate generalization for many applications. ",
+ "neighbors": [
+ 738
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 596,
+ "label": 2,
+ "text": "Title: Recurrent Neural Networks: Some Systems-Theoretic Aspects \nAbstract: This paper provides an exposition of some recent research regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation properties. Known characterizations of controllability, ob-servability, and parameter identifiability are reviewed, as well as a result on minimality. Facts regarding the computational power of recurrent nets are also mentioned. fl Supported in part by US Air Force Grant AFOSR-94-0293",
+ "neighbors": [
+ 116,
+ 588,
+ 597,
+ 798
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 597,
+ "label": 2,
+ "text": "Title: Complete Controllability of Continuous-Time Recurrent Neural Networks \nAbstract: This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent.",
+ "neighbors": [
+ 596,
+ 798
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 598,
+ "label": 0,
+ "text": "Title: Model-Based Learning of Structural Indices to Design Cases \nAbstract: A major issue in case-basedsystems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization. ",
+ "neighbors": [
+ 755
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 599,
+ "label": 0,
+ "text": "Title: GIT-CC-92/60 A Model-Based Approach to Analogical Reasoning and Learning in Design \nAbstract: A major issue in case-basedsystems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization. ",
+ "neighbors": [
+ 352,
+ 755,
+ 758,
+ 761,
+ 762
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 600,
+ "label": 2,
+ "text": "Title: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory \nAbstract: The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information. ",
+ "neighbors": [
+ 608
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 601,
+ "label": 6,
+ "text": "Title: Bias Plus Variance Decomposition for Zero-One Loss Functions \nAbstract: We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository.",
+ "neighbors": [
+ 496,
+ 540,
+ 585,
+ 671,
+ 899
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 602,
+ "label": 2,
+ "text": "Title: Submitted to the Future Generation Computer Systems special issue on Data Mining. Using Neural Networks\nAbstract: Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural-network methods are not commonly used for data-mining tasks, however, because they often produce incomprehensible models and require long training times. In this article, we describe neural-network learning algorithms that are able to produce comprehensible models, and that do not require excessive training times. Specifically, we discuss two classes of approaches for data mining with neural networks. The first type of approach, often called rule extraction, involves extracting symbolic models from trained neural networks. The second approach is to directly learn simple, easy-to-understand networks. We argue that, given the current state of the art, neural-network methods deserve a place in the tool boxes of data-mining specialists. ",
+ "neighbors": [
+ 367,
+ 765,
+ 826
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 603,
+ "label": 1,
+ "text": "Title: An Overview of Genetic Algorithms Part 1, Fundamentals \nAbstract: Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given.",
+ "neighbors": [
+ 91,
+ 134,
+ 554,
+ 643,
+ 847,
+ 1089
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 604,
+ "label": 5,
+ "text": "Title: Stochastic search in inductive concept learning \nAbstract: Concept learning can be viewed as search of the space of concept descriptions. The hypothesis language determines the search space. In standard inductive learning algorithms, the structure of the search space is determined by generalization/specialization operators. Algorithms perform locally optimal search by using a hill-climbing and/or a beam-search strategy. To overcome this limitation, concept learning can be viewed as stochastic search of the space of concept descriptions. The proposed stochastic search method is based on simulated annealing which is known as a successful means for solving combinatorial optimization problems. The stochastic search method, implemented in a rule learning system ATRIS, is based on a compact and efficient representation of the problem and the appropriate operators for structuring the search space. Furthermore, by heuristic pruning of the search space, the method enables also handling of imperfect data. The paper introduces the stochastic search method, describes the ATRIS learning algorithm and gives results of the experiments. ",
+ "neighbors": [
+ 217,
+ 239,
+ 342,
+ 576,
+ 701,
+ 882,
+ 921
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 605,
+ "label": 1,
+ "text": "Title: An Analysis of the Effects of Neighborhood Size and Shape on Local Selection Algorithms \nAbstract: The increasing availability of finely-grained parallel architectures has resulted in a variety of evolutionary algorithms (EAs) in which the population is spatially distributed and local selection algorithms operate in parallel on small, overlapping neighborhoods. The effects of design choices regarding the particular type of local selection algorithm as well as the size and shape of the neighborhood are not particularly well understood and are generally tested empirically. In this paper we extend the techniques used to more formally analyze selection methods for sequential EAs and apply them to local neighborhood models, resulting in a much clearer understanding of the effects of neighborhood size and shape.",
+ "neighbors": [
+ 607,
+ 643,
+ 652
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 606,
+ "label": 3,
+ "text": "Title: Incremental Tradeoff Resolution in Qualitative Probabilistic Networks \nAbstract: Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determinate qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods. ",
+ "neighbors": [
+ 192,
+ 364,
+ 551,
+ 1046
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 607,
+ "label": 1,
+ "text": "Title: A Survey of Parallel Genetic Algorithms \nAbstract: IlliGAL Report No. 97003 May 1997 ",
+ "neighbors": [
+ 91,
+ 605,
+ 627,
+ 652,
+ 732
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 608,
+ "label": 2,
+ "text": "Title: The Rectified Gaussian Distribution \nAbstract: A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds.",
+ "neighbors": [
+ 19,
+ 600
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 609,
+ "label": 2,
+ "text": "Title: A Fast Fixed-Point Algorithm for Independent Component Analysis \nAbstract: This paper will appear in Neural Computation, 9:1483-1492, 1997. Abstract We introduce a novel fast algorithm for Independent Component Analysis, which can be used for blind source separation and feature extraction. It is shown how a neural network learning rule can be transformed into a txed-point iteration, which provides an algorithm that is very simple, does not depend on any user-detned parameters, and is fast to converge to the most accurate solution allowed by the data. The algorithm tnds, one at a time, all non-Gaussian independent components, regardless of their probability distributions. The computations can be performed either in batch mode or in a semi-adaptive manner. The convergence of the algorithm is rigorously proven, and the convergence speed is shown to be cubic. Some comparisons to gradient based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.",
+ "neighbors": [
+ 331,
+ 335,
+ 483,
+ 487,
+ 992
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 610,
+ "label": 1,
+ "text": "Title: Extended Selection Mechanisms in Genetic Algorithms \nAbstract: ",
+ "neighbors": [
+ 91,
+ 237,
+ 462,
+ 807,
+ 937
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 611,
+ "label": 1,
+ "text": "Title: Extended Selection Mechanisms in Genetic Algorithms \nAbstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993 ",
+ "neighbors": [
+ 91,
+ 462,
+ 579,
+ 652
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 612,
+ "label": 5,
+ "text": "Title: An adaptation of Relief for attribute estimation in regression \nAbstract: Heuristic measures for estimating the quality of attributes mostly assume the independence of attributes so in domains with strong dependencies between attributes their performance is poor. Relief and its extension ReliefF are capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view. We present the analysis of Reli-efF which lead us to its adaptation to regression (continuous class) problems. The experiments on artificial and real-world data sets show that Re-gressional ReliefF correctly estimates the quality of attributes in various conditions, and can be used for non-myopic learning of the regression trees. Regressional ReliefF and ReliefF provide a unified view on estimating the attribute quality in regression and classification.",
+ "neighbors": [
+ 183,
+ 631,
+ 665,
+ 875,
+ 911
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 613,
+ "label": 2,
+ "text": "Title: Nonlinear Prediction of Chaotic Time Series Using Support Vector Machines \nAbstract: A novel method for regression has been recently proposed by V. Vapnik et al. [8, 9]. The technique, called Support Vector Machine (SVM), is very well founded from the mathematical point of view and seems to provide a new insight in function approximation. We implemented the SVM and tested it on the same data base of chaotic time series that was used in [1] to compare the performances of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. The SVM performs better than the approaches presented in [1]. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM.",
+ "neighbors": [
+ 357,
+ 559,
+ 575,
+ 625,
+ 736,
+ 782,
+ 951
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 614,
+ "label": 2,
+ "text": "Title: A Multi-Chip Module Implementation of a Neural Network \nAbstract: The requirement for dense interconnect in artificial neural network systems has led researchers to seek high-density interconnect technologies. This paper reports an implementation using multi-chip modules (MCMs) as the interconnect medium. The specific system described is a self-organizing, parallel, and dynamic learning model which requires a dense interconnect technology for effective implementation; this requirement is fulfilled by exploiting MCM technology. The ideas presented in this paper regarding an MCM implementation of artificial neural networks are versatile and can be adapted to apply to other neural network and connectionist models. ",
+ "neighbors": [
+ 470,
+ 472,
+ 473,
+ 641,
+ 738
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 615,
+ "label": 5,
+ "text": "Title: Specialization of Recursive Predicates \nAbstract: When specializing a recursive predicate in order to exclude a set of negative examples without excluding a set of positive examples, it may not be possible to specialize or remove any of the clauses in a refutation of a negative example without excluding any positive exam ples. A previously proposed solution to this problem is to apply program transformation in order to obtain non-recursive target predicates from recursive ones. However, the application of this method prevents recursive specializations from being found. In this work, we present the algorithm spectre ii which is not limited to specializing non-recursive predicates. The key idea upon which the algorithm is based is that it is not enough to specialize or remove clauses in refutations of negative examples in order to obtain correct specializations, but it is sometimes necessary to specialize clauses that appear only in refutations of positive examples. In contrast to its predecessor spectre, the new algorithm is not limited to specializing clauses defining one predicate only, but may specialize clauses defining multiple predicates. Furthermore, the positive and negative examples are no longer required to be instances of the same predicate. It is proven that the algorithm produces a correct specialization when all positive examples are logical consequences of the original program, there is a finite number of derivations of positive and negative examples and when no positive and negative examples have the same sequence of input clauses in their refutations.",
+ "neighbors": [
+ 299,
+ 616,
+ 708
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 616,
+ "label": 5,
+ "text": "Title: Specialization of Logic Programs by Pruning SLD-Trees \nAbstract: program w.r.t. positive and negative examples can be viewed as the problem of pruning an SLD-tree such that all refutations of negative examples and no refutations of positive examples are excluded. It is shown that the actual pruning can be performed by applying unfolding and clause removal. The algorithm spectre is presented, which is based on this idea. The input to the algorithm is, besides a logic program and positive and negative examples, a computation rule, which determines the shape of the SLD-tree that is to be pruned. It is shown that the generality of the resulting specialization is dependent on the computation rule, and experimental results are presented from using three different computation rules. The experiments indicate that the computation rule should be formulated so that the number of applications of unfolding is kept as low as possible. The algorithm, which uses a divide-and-conquer method, is also compared with a covering algorithm. The experiments show that a higher predictive accuracy can be achieved if the focus is on discriminating positive from negative examples rather than on achieving a high coverage of positive examples only. ",
+ "neighbors": [
+ 299,
+ 615,
+ 708,
+ 1199
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 617,
+ "label": 0,
+ "text": "Title: Continuous Case-Based Reasoning \nAbstract: Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as online sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task. This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (Self-Improving Navigation System). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research. ",
+ "neighbors": [
+ 500,
+ 566,
+ 1088
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 618,
+ "label": 3,
+ "text": "Title: An Algorithm for the Construction of Bayesian Network Structures from Data \nAbstract: Previous algorithms for the construction of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches - CI tests are used to generate an ordering on the nodes from the database which is then used to recover the underlying Bayesian network structure using a non CI based method. Results of preliminary evaluation of the algorithm on two networks (ALARM and LED) are presented. We also discuss some algo rithm performance issues and open problems.",
+ "neighbors": [
+ 698,
+ 850,
+ 863,
+ 884,
+ 914
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 619,
+ "label": 6,
+ "text": "Title: Pruning Adaptive Boosting ICML-97 Final Draft \nAbstract: The boosting algorithm AdaBoost, developed by Freund and Schapire, has exhibited outstanding performance on several benchmark problems when using C4.5 as the \"weak\" algorithm to be \"boosted.\" Like other ensemble learning approaches, AdaBoost constructs a composite hypothesis by voting many individual hypotheses. In practice, the large amount of memory required to store these hypotheses can make ensemble methods hard to deploy in applications. This paper shows that by selecting a subset of the hypotheses, it is possible to obtain nearly the same levels of performance as the entire set. The results also provide some insight into the behavior of AdaBoost.",
+ "neighbors": [
+ 330,
+ 826
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 620,
+ "label": 2,
+ "text": "Title: Plasticity in cortical neuron properties: Modeling the effects of an NMDA antagonist and a GABA\nAbstract: Infusion of a GABA agonist (Reiter & Stryker, 1988) and infusion of an NMDA receptor antagonist (Bear et al., 1990), in the primary visual cortex of kittens during monocular deprivation, shifts ocular dominance toward the closed eye, in the cortical region near the infusion site. This reverse ocular dominance shift has been previously modeled by variants of a covariance synaptic plasticity rule (Bear et al., 1990; Clothiaux et al., 1991; Miller et al., 1989; Reiter & Stryker, 1988). Kasamatsu et al. (1997, 1998) showed that infusion of an NMDA receptor antagonist in adult cat primary visual cortex changes ocular dominance distribution, reduces binocularity, and reduces orientation and direction selectivity. This paper presents a novel account of the effects of these pharmacological treatments, based on the EXIN synaptic plasticity rules (Marshall, 1995), which include both an instar afferent excitatory and an outstar lateral inhibitory rule. Functionally, the EXIN plasticity rules enhance the efficiency, discrimination, and context-sensitivity of a neural network's representation of perceptual patterns (Marshall, 1995; Marshall & Gupta, 1998). The EXIN model decreases lateral inhibition from neurons outside the infusion site (control regions) to neurons inside the infusion region, during monocular deprivation. In the model, plasticity in afferent pathways to neurons affected by the pharmacological treatments is assumed to be blocked , as opposed to previous models (Bear et al., 1990; Miller et al., 1989; Reiter & Stryker, 1988), in which afferent pathways from the open eye to neurons in the infusion region are weakened . The proposed model is consistent with results suggesting that long-term plasticity can be blocked by NMDA antagonists or by postsynaptic hyperpolarization (Bear et al., 1990; Dudek & Bear, 1992; Goda & Stevens, 1996; Kirkwood et al., 1993). Since the role of plasticity in lateral inhibitory pathways in producing cortical plasticity has not received much attention, several predictions are made based on the EXIN lateral inhibitory plasticity rule. ",
+ "neighbors": [
+ 203,
+ 926,
+ 1167
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 621,
+ "label": 6,
+ "text": "Title: Learning Unions of Boxes with Membership and Equivalence Queries \nAbstract: We present two algorithms that use membership and equivalence queries to exactly identify the concepts given by the union of s discretized axis-parallel boxes in d-dimensional discretized Euclidean space where each coordinate can have n discrete values. The first algorithm receives at most sd counterexamples and uses time and membership queries polynomial in s and log n for d any constant. Further, all equivalence queries made can be formulated as the union of O(sd log s) axis-parallel boxes. Next, we introduce a new complexity measure that better captures the complexity of a union of boxes than simply the number of boxes and dimensions. Our new measure, , is the number of segments in the target polyhedron where a segment is a maximum portion of one of the sides of the polyhedron that lies entirely inside or entirely outside each of the other halfspaces defining the polyhedron. We then present an improvement of our first algorithm that uses time and queries polynomial in and log n. The hypothesis class used here is decision trees of height at most 2sd. Further we can show that the time and queries used by this algorithm are polynomial in d and log n for s any constant thus generalizing the exact learnability of DNF formulas with a constant number of terms. In fact, this single algorithm is efficient for either s or d constant. ",
+ "neighbors": [
+ 808
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 622,
+ "label": 1,
+ "text": "Title: Scheduling Maintenance of Electrical Power Transmission Networks Using Genetic Programming \nAbstract: Previous work showed the combination of a Genetic Algorithm using an order or permutation chromosome combined with hand coded \"Greedy\" Optimizers can readily produce an optimal schedule for a four node test problem [ Langdon, 1995 ] . Following this the same GA has been used to find low cost schedules for the South Wales region of the UK high voltage power network. This paper describes the evolution of the best known schedule for the base South Wales problem using Genetic Programming starting from the hand coded heuris tics used with the GA.",
+ "neighbors": [
+ 91,
+ 197,
+ 732,
+ 1034
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 623,
+ "label": 2,
+ "text": "Title: Observability of Linear Systems with Saturated Outputs \nAbstract: In this paper, we present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured.",
+ "neighbors": [
+ 815
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 624,
+ "label": 5,
+ "text": "Title: Automated Refinement of First-Order Horn-Clause Domain Theories \nAbstract: Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. ",
+ "neighbors": [
+ 72,
+ 661,
+ 771,
+ 790,
+ 823
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 625,
+ "label": 2,
+ "text": "Reference: [39] Yoda, M. (1994).
Predicting the Tokyo stock market. In Deboeck, G.J. (Ed.) (1994). Trading on the Edge. New York: Wiley., 66-79. VITA Graduate School Southern Illinois University Daniel Nikolaev Nikovski Date of Birth: April 13, 1969 606 West College Street, Apt.4, Rm. 6, Carbondale, Illinois 62901 150 Hristo Botev Boulevard, Apt. 54, 4004 Plovdiv, Bulgaria Technical University - Sofia, Bulgaria Engineer of Computer Systems and Control Thesis Title: Adaptive Computation Techniques for Time Series Analysis Major Professor: Dr. Mehdi Zargham \nAbstract: Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. ",
+ "neighbors": [
+ 40,
+ 240,
+ 357,
+ 613
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 626,
+ "label": 6,
+ "text": "Title: PAC Learning Intersections of Halfspaces with Membership Queries (Extended Abstract) \nAbstract: ",
+ "neighbors": [
+ 346,
+ 1216
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 627,
+ "label": 1,
+ "text": "Title: Genetic Algorithms for Combinatorial Optimization: The Assembly Line Balancing Problem \nAbstract: Genetic algorithms are one example of the use of a random element within an algorithm for combinatorial optimization. We consider the application of the genetic algorithm to a particular problem, the Assembly Line Balancing Problem. A general description of genetic algorithms is given, and their specialized use on our test-bed problems is discussed. We carry out extensive computational testing to find appropriate values for the various parameters associated with this genetic algorithm. These experiments underscore the importance of the correct choice of a scaling parameter and mutation rate to ensure the good performance of a genetic algorithm. We also describe a parallel implementation of the genetic algorithm and give some comparisons between the parallel and serial implementations. Both versions of the algorithm are shown to be effective in producing good solutions for problems of this type (with appropriately chosen parameters). ",
+ "neighbors": [
+ 91,
+ 607,
+ 732
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 628,
+ "label": 3,
+ "text": "Title: Confidence as Higher Order Uncertainty proposed for handling higher order uncertainty, including the Bayesian approach,\nAbstract: ",
+ "neighbors": [
+ 836,
+ 837,
+ 838,
+ 839
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 629,
+ "label": 1,
+ "text": "Title: On The State of Evolutionary Computation \nAbstract: In the past few years the evolutionary computation landscape has been rapidly changing as a result of increased levels of interaction between various research groups and the injection of new ideas which challenge old tenets. The effect has been simultaneously exciting, invigorating, annoying, and bewildering to the old-timers as well as the new-comers to the field. Emerging out of all of this activity are the beginnings of some structure, some common themes, and some agreement on important open issues. We attempt to summarize these emergent properties in this paper. ",
+ "neighbors": [
+ 91,
+ 462,
+ 579,
+ 955
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 630,
+ "label": 0,
+ "text": "Title: Towards a Better Understanding of Memory-Based Reasoning Systems \nAbstract: We quantify both experimentally and analytically the performance of memory-based reasoning (MBR) algorithms. To start gaining insight into the capabilities of MBR algorithms, we compare an MBR algorithm using a value difference metric to a popular Bayesian classifier. These two approaches are similar in that they both make certain independence assumptions about the data. However, whereas MBR uses specific cases to perform classification, Bayesian methods summarize the data probabilistically. We demonstrate that a particular MBR system called Pebls works comparatively well on a wide range of domains using both real and artificial data. With respect to the artificial data, we consider distributions where the concept classes are separated by functional discriminants, as well as time-series data generated by Markov models of varying complexity. Finally, we show formally that Pebls can learn (in the limit) natural concept classes that the Bayesian classifier cannot learn, and that it will attain perfect accuracy whenever ",
+ "neighbors": [
+ 150,
+ 743,
+ 751,
+ 1191
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 631,
+ "label": 2,
+ "text": "Title: Flexible Metric Nearest Neighbor Classiflcation \nAbstract: The K-nearest-neighbor decision rule assigns an object of unknown class to the plurality class among the K labeled \"training\" objects that are closest to it. Closeness is usually deflned in terms of a metric distance on the Euclidean space with the input measurement variables as axes. The metric chosen to deflne this distance can strongly efiect performance. An optimal choice depends on the problem at hand as characterized by the respective class distributions on the input measurement space, and within a given problem, on the location of the unknown object in that space. In this paper new types of K-nearest-neighbor procedures are described that estimate the local relevance of each input variable, or their linear combinations, for each individual point to be classifled. This information is then used to separately customize the metric used to deflne distance from that object in flnding its nearest neighbors. These procedures are a hybrid between regular K-nearest-neighbor methods and treestructured recursive partitioning techniques popular in statistics and machine learning.",
+ "neighbors": [
+ 57,
+ 417,
+ 612,
+ 905
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 632,
+ "label": 1,
+ "text": "Title: Using Genetic Algorithms to Explore Pattern Recognition in the Immune System COMMENTS WELCOME \nAbstract: This paper describes an immune system model based on binary strings. The purpose of the model is to study the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of the model. The paper reports simulation experiments on two pattern recognition problems that are relevant to natural immune systems. Finally, it reviews the relation between the model and explicit fitness sharing techniques for genetic algorithms, showing that the immune system model implements a form of implicit fitness sharing. ",
+ "neighbors": [
+ 91,
+ 351,
+ 634,
+ 709,
+ 887,
+ 941
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 633,
+ "label": 2,
+ "text": "Title: Equivalence of Linear Boltzmann Chains and Hidden Markov Models sequence L, is: where Z(; A;\nAbstract: Several authors have made a link between hidden Markov models for time series and energy-based models (Luttrell 1989, Williams 1990, Saul and Jordan 1995). Saul and Jordan (1995) discuss a linear Boltzmann chain model with state-state transition energies A ii 0 (going from state i to state i 0 ) and symbol emission energies B ij , under which the probability of an entire state fi l ; j l g L Whilst any HMM can be written as a linear Boltzmann chain by setting exp(A ii 0 ) = a ii 0 , exp(B ij ) = b ij and exp( i ) = i , not all linear Boltzmann chains can be represented as HMMs (Saul and Jordan 1995). However, the difference between the two models is minimal. To be precise, if the final hidden ",
+ "neighbors": [
+ 16,
+ 357,
+ 411,
+ 424,
+ 891
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 634,
+ "label": 1,
+ "text": "Title: A Coevolutionary Approach to Learning Sequential Decision Rules \nAbstract: We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emer gent problem decompositions.",
+ "neighbors": [
+ 140,
+ 324,
+ 529,
+ 632,
+ 688,
+ 709,
+ 887,
+ 1107
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 635,
+ "label": 2,
+ "text": "Title: Adaptive Parameter Pruning in Neural Networks \nAbstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization. An open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This paper presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. The results of extensive experimentation indicate that lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. Results of statistical significance tests comparing autoprune to the new method lprune as well as to backpropagation with early stopping are given for 14 different problems. ",
+ "neighbors": [
+ 510,
+ 674,
+ 1239
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 636,
+ "label": 0,
+ "text": "Title: A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems \nAbstract: The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. 1",
+ "neighbors": [
+ 348,
+ 416,
+ 463,
+ 672,
+ 854
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 637,
+ "label": 0,
+ "text": "Title: MAC/FAC: A Model of Similarity-based Retrieval \nAbstract: We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical re-mindings. Our model, called MAC/FAC (for \"many are called but few are chosen\") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data. ",
+ "neighbors": [
+ 41,
+ 309,
+ 311,
+ 662,
+ 669,
+ 761,
+ 825,
+ 932,
+ 935
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 638,
+ "label": 0,
+ "text": "Title: Constructive Similarity Assessment: Using Stored Cases to Define New Situations \nAbstract: A fundamental issue in case-based reasoning is similarity assessment: determining similarities and differences between new and retrieved cases. Many methods have been developed for comparing input case descriptions to the cases already in memory. However, the success of such methods depends on the input case description being sufficiently complete to reflect the important features of the new situation, which is not assured. In case-based explanation of anomalous events during story understanding, the anomaly arises because the current situation is incompletely understood; consequently, similarity assessment based on matches between known current features and old cases is likely to fail because of gaps in the current case's description. Our solution to the problem of gaps in a new case's description is an approach that we call constructive similarity assessment. Constructive similarity assessment treats similarity assessment not as a simple comparison between fixed new and old cases, but as a process for deciding which types of features should be investigated in the new situation and, if the features are borne out by other knowledge, added to the description of the current case. Constructive similarity assessment does not merely compare new cases to old: using prior cases as its guide, it dynamically carves augmented descriptions of new cases out of memory. ",
+ "neighbors": [
+ 474,
+ 475,
+ 825
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 639,
+ "label": 0,
+ "text": "Title: Towards A Computer Model of Memory Search Strategy Learning \nAbstract: Much recent research on modeling memory processes has focused on identifying useful indices and retrieval strategies to support particular memory tasks. Another important question concerning memory processes, however, is how retrieval criteria are learned. This paper examines the issues involved in modeling the learning of memory search strategies. It discusses the general requirements for appropriate strategy learning and presents a model of memory search strategy learning applied to the problem of retrieving relevant information for adapting cases in case-based reasoning. It discusses an implementation of that model, and, based on the lessons learned from that implementation, points towards issues and directions in refining the model. ",
+ "neighbors": [
+ 337,
+ 340,
+ 536,
+ 679,
+ 833,
+ 1225,
+ 1270
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 640,
+ "label": 1,
+ "text": "Title: Recombination Operator, its Correlation to the Fitness Landscape and Search Performance \nAbstract: The author reserves all other publication and other rights in association with the copyright in the thesis, and except as hereinbefore provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatever without the author's prior written permission. ",
+ "neighbors": [
+ 91,
+ 420,
+ 462,
+ 543,
+ 652
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 641,
+ "label": 2,
+ "text": "Title: A Self-Organizing Binary Decision Tree For Incrementally Defined Rule Based \nAbstract: This paper presents an ASOCS (adaptive self-organizing concurrent system) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on adaptive algorithm 3 (AA3) and details its architecture and learning algorithm. It has advantages over previous ASOCS models in simplicity, implementability, and cost. An ASOCS can operate in either a data processing mode or a learning mode. During the data processing mode, an ASOCS acts as a parallel hardware circuit. In learning mode, rules expressed as boolean conjunctions are incrementally presented to the ASOCS. All ASOCS learning algorithms incorporate a new rule in a distributed fashion in a short, bounded time. ",
+ "neighbors": [
+ 12,
+ 470,
+ 473,
+ 614,
+ 670,
+ 685
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 642,
+ "label": 1,
+ "text": "Title: ADAPTIVE TESTING OF CONTROLLERS FOR AUTONOMOUS VEHICLES \nAbstract: Autonomous vehicles are likely to require sophisticated software controllers to maintain vehicle performance in the presence of vehicle faults. The test and evaluation of complex software controllers is expected to be a challenging task. The goal of this e ffort is to apply machine learning techniques from the field of arti ficial intelligence to the general problem of evaluating an intelligent controller for an autonomous vehicle. The approach involves subjecting a controller to an adaptively chosen set of fault scenarios within a vehicle simulator, and searching for combinations of faults that produce noteworthy performance by the vehicle controller. The search employs a genetic algorithm. We illustrate the approach by evaluating the performance of a subsumption-based controller for an autonomous vehicle. The preliminary evidence suggests that this approach is an e ffective alternative to manual testing of sophisticated software controllers. ",
+ "neighbors": [
+ 529,
+ 555,
+ 704
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 643,
+ "label": 1,
+ "text": "Title: Using Neural Networks and Genetic Algorithms as Heuristics for NP-Complete Problems \nAbstract: Paradigms for using neural networks (NNs) and genetic algorithms (GAs) to heuristically solve boolean satisfiability (SAT) problems are presented. Since SAT is NP-Complete, any other NP-Complete problem can be transformed into an equivalent SAT problem in polynomial time, and solved via either paradigm. This technique is illustrated for hamiltonian circuit (HC) problems. ",
+ "neighbors": [
+ 91,
+ 419,
+ 581,
+ 590,
+ 603,
+ 605,
+ 745,
+ 844,
+ 847,
+ 880
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 644,
+ "label": 0,
+ "text": "Title: Learning Generic Mechanisms from Experiences for Analogical Reasoning \nAbstract: Humans appear to often solve problems in a new domain by transferring their expertise from a more familiar domain. However, making such cross-domain analogies is hard and often requires abstractions common to the source and target domains. Recent work in case-based design suggests that generic mechanisms are one type of abstractions used by designers. However, one important yet unexplored issue is where these generic mechanisms come from. We hypothesize that they are acquired incrementally from problem-solving experiences in familiar domains by generalization over patterns of regularity. Three important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. In this paper, we show that mental models in a familiar domain provide the content, and together with the problem-solving context in which learning occurs, also provide the constraints for learning generic mechanisms from design experiences. In particular, we show how the model-based learning method integrated with similarity-based learning addresses the issues in generalization from experiences. ",
+ "neighbors": [
+ 468,
+ 893
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 645,
+ "label": 1,
+ "text": "Title: Optimal Mutation Rates in Genetic Search \nAbstract: The optimization of a single bit string by means of iterated mutation and selection of the best (a (1+1)-Genetic Algorithm) is discussed with respect to three simple fitness functions: The counting ones problem, a standard binary encoded integer, and a Gray coded integer optimization problem. A mutation rate schedule that is optimal with respect to the success probability of mutation is presented for each of the objective functions, and it turns out that the standard binary code can hamper the search process even in case of unimodal objective functions. While normally a mutation rate of 1=l (where l denotes the bit string length) is recommendable, our results indicate that a variation of the mutation rate is useful in cases where the fitness function is a multimodal pseudo-boolean function, where multimodality may be caused by the objective function as well as the encoding mechanism.",
+ "neighbors": [
+ 91,
+ 462,
+ 581,
+ 877
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 646,
+ "label": 1,
+ "text": "Title: Neural Networks in an Artificial Life Perspective \nAbstract: In the last few years several researchers within the Artificial Life and Mobile Robotics community used Artificial Neural Networks. Explicitly viewing Neural Networks in an Artificial Life perspective has a number of consequences that make research on what we will call Artificial Life Neural Networks ( ALNNs) rather different from traditional connectionist research. The aim of the paper is to make the differences between ALNNs and \"classical\" neural networks explicit.",
+ "neighbors": [
+ 786,
+ 1138,
+ 1249
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 647,
+ "label": 2,
+ "text": "Title: A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses \nAbstract: Recent interest has come about in deriving various neural network architectures for modelling time-dependent signals. A number of algorithms have been published for multilayer perceptrons with synapses described by finite impulse response (FIR) and infinite impulse response (IIR) filters (the latter case is also known as Locally Recurrent Globally Feedforward Networks). The derivations of these algorithms have used different approaches in calculating the gradients, and in this note, we present a short, but unifying account of how these different algorithms compare for the FIR case, both in derivation, and performance. New algorithms are subsequently presented. Simulation results have been performed to benchmark these algorithms. In this note, results are compared for the Mackey-Glass chaotic time series against a number of other methods including a standard multilayer perceptron, and a local approximation method. ",
+ "neighbors": [
+ 739
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 648,
+ "label": 3,
+ "text": "Title: Decomposable graphical Gaussian model determination \nAbstract: We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the corresponding element of the inverse is nonzero. This allows all computations to be performed locally, at the clique level, which is a clear advantage for the analysis of large and complex data-sets. Finally, the statistical and computational performance of the procedure is illustrated by means of both artificial and real multidimensional data-sets. ",
+ "neighbors": [
+ 90,
+ 448,
+ 698,
+ 757
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 649,
+ "label": 0,
+ "text": "Title: Opportunistic Reasoning: A Design Perspective \nAbstract: An essential component of opportunistic behavior is opportunity recognition, the recognition of those conditions that facilitate the pursuit of some suspended goal. Opportunity recognition is a special case of situation assessment, the process of sizing up a novel situation. The ability to recognize opportunities for reinstating suspended problem contexts (one way in which goals manifest themselves in design) is crucial to creative design. In order to deal with real world opportunity recognition, we attribute limited inferential power to relevant suspended goals. We propose that goals suspended in the working memory monitor the internal (hidden) representations of the currently recognized objects. A suspended goal is satisfied when the current internal representation and a suspended goal match. We propose a computational model for working memory and we compare it with other relevant theories of opportunistic planning. This working memory model is implemented as part of our IMPROVISER system. ",
+ "neighbors": [
+ 15,
+ 278,
+ 762,
+ 854,
+ 893
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 650,
+ "label": 2,
+ "text": "Title: What Size Neural Network Gives Optimal Generalization? Convergence Properties of Backpropagation \nAbstract: Technical Report UMIACS-TR-96-22 and CS-TR-3617 Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 Abstract One of the most important aspects of any machine learning paradigm is how it scales according to problem size and complexity. Using a task with known optimal training error, and a pre-specified maximum number of training updates, we investigate the convergence of the backpropagation algorithm with respect to a) the complexity of the required function approximation, b) the size of the network in relation to the size required for an optimal solution, and c) the degree of noise in the training data. In general, for a) the solution found is worse when the function to be approximated is more complex, for b) oversized networks can result in lower training and generalization error in certain cases, and for c) the use of committee or ensemble techniques can be more beneficial as the level of noise in the training data is increased. For the experiments we performed, we do not obtain the optimal solution in any case. We further support the observation that larger networks can produce better training and generalization error using a face recognition example where a network with many more parameters than training points generalizes better than smaller networks. ",
+ "neighbors": [
+ 530,
+ 651,
+ 739,
+ 1025
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 651,
+ "label": 2,
+ "text": "Title: Lessons in Neural Network Training: Overfitting Lessons in Neural Network Training: Overfitting May be Harder\nAbstract: For many reasons, neural networks have become very popular AI machine learning models. Two of the most important aspects of machine learning models are how well the model generalizes to unseen data, and how well the model scales with problem complexity. Using a controlled task with known optimal training error, we investigate the convergence of the backpropagation (BP) algorithm. We find that the optimal solution is typically not found. Furthermore, we observe that networks larger than might be expected can result in lower training and generalization error. This result is supported by another real world example. We further investigate the training behavior by analyzing the weights in trained networks (excess degrees of freedom are seen to do little harm and to aid convergence), and contrasting the interpolation characteristics of multi-layer perceptron neural networks (MLPs) and polynomial models (overfitting behavior is very different the MLP is often biased towards smoother solutions). Finally, we analyze relevant theory outlining the reasons for significant practical differences. These results bring into question common beliefs about neural network training regarding convergence and optimal network size, suggest alternate guidelines for practical use (lower fear of excess degrees of freedom), and help to direct future work (e.g. methods for creation of more parsimonious solutions, importance of the MLP/BP bias and possibly worse performance of improved training algorithms). ",
+ "neighbors": [
+ 530,
+ 650,
+ 739
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 652,
+ "label": 1,
+ "text": "Title: Evolution in Time and Space The Parallel Genetic Algorithm \nAbstract: The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is succesful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see, that a PGA tries to jump from two local minima to a third, still better local minima, by using the crossover operator. This jump is (probabilistically) successful, if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation.",
+ "neighbors": [
+ 91,
+ 499,
+ 605,
+ 607,
+ 611,
+ 640,
+ 675,
+ 676,
+ 707,
+ 807,
+ 902
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 653,
+ "label": 0,
+ "text": "Title: Memory-Based Lexical Acquisition and Processing \nAbstract: Current approaches to computational lexicology in language technology are knowledge-based (competence-oriented) and try to abstract away from specific formalisms, domains, and applications. This results in severe complexity, acquisition and reusability bottlenecks. As an alternative, we propose a particular performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic (lexical) tasks. The consequences of the approach for computational lexicology are discussed, and the application of the approach on a number of lexical acquisition and disambiguation tasks in phonology, morphology and syntax is described.",
+ "neighbors": [
+ 456,
+ 743,
+ 787,
+ 894,
+ 990
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 654,
+ "label": 1,
+ "text": "Title: An evolutionary tabu search algorithm and the NHL scheduling problem \nAbstract: We present in this paper a new evolutionary procedure for solving general optimization problems that combines efficiently the mechanisms of genetic algorithms and tabu search. In order to explore the solution space properly interaction phases are interspersed with periods of optimization in the algorithm. An adaptation of this search principle to the National Hockey League (NHL) problem is discussed. The hybrid method developed in this paper is well suited for Open Shop Scheduling problems (OSSP). The results obtained appear to be quite satisfactory. ",
+ "neighbors": [
+ 91,
+ 827,
+ 1296
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 655,
+ "label": 6,
+ "text": "Title: PFSA Modelling of Behavioural Sequences by Evolutionary Programming Rockhampton, Queensland. (1994) \"PFSA Modelling of Behavioural\nAbstract: Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a \"can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator.",
+ "neighbors": [
+ 657
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 656,
+ "label": 6,
+ "text": "Title: Inductive Learning by Selection of Minimal Complexity Representations \nAbstract: Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a \"can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator.",
+ "neighbors": [
+ 869,
+ 890,
+ 1245
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 657,
+ "label": 6,
+ "text": "Title: Assessment of candidate pfsa models induced from symbol datasets \nAbstract: The induction of the optimal finite state machine explanation from symbol strings is known to be at least NP-complete. However, satisfactory approximately optimal explanations may be found by the use of Evolutionary Programming. It has been shown that an information theoretic measure of finite state machine explanations can be used as the fitness function required for the evaluation of candidate explanations during the search for a near-optimal explanation. It is not obvious from the measure which class of explanation will be favoured over others during the search. By empirical studies it is possible to gain some insight into the dimensions the measure is optimising. In general, for probabilistic finite state machines, explanations assessed by a minimum message length estimator with the minimum number of transitions will be favoured over other explanations. The information measure will also favour explanations with uneven distributions of frequencies on transitions from a node suggesting that repeated sequences in symbol strings will be preferred as an explanation. Approximate bounds for acceptance of explanations and the length of string required for induction to be successful are also derived by considerations of the simplest possible and random explanations and their information measure. ",
+ "neighbors": [
+ 655
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 658,
+ "label": 2,
+ "text": "Title: Individual and Collective Prognostic Prediction \nAbstract: The prediction of survival time or recurrence time is an important learning problem in medical domains. The Recurrence Surface Approximation (RSA) method is a natural, effective method for predicting recurrence times using censored input data. This paper introduces the Survival Curve RSA (SC-RSA), an extension to the RSA approach which produces accurate predicted rates of recurrence, while maintaining accuracy on individual predicted recurrence times. The method is applied to the problem of breast cancer recurrence using two different datasets. ",
+ "neighbors": [
+ 301,
+ 721
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 659,
+ "label": 6,
+ "text": "Title: Selective sampling using the Query by Committee algorithm Running title: Selective sampling using Query by Committee \nAbstract: We analyze the \"query by committee\" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons. Keywords: selective sampling, query learning, Bayesian Learning, experimental design fl Yoav Freund, Room 2B-428, AT&T Laboratories, 700 Mountain Ave., Murray Hill, NJ, 07974. Telephone:908-582-3164.",
+ "neighbors": [
+ 297
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 660,
+ "label": 3,
+ "text": "Title: Introduction to the Special Section on Knowledge-Based Construction of Probabilistic and Decision Models (IEEE Transactions\nAbstract: Modeling techniques developed recently in the AI and uncertain reasoning communities permit significantly more flexible specifications of probabilistic knowledge. Specifically, graphical decision-modeling formalisms|belief networks, influence diagrams, and their variants|provide compact representation of probabilistic relationships, and support inference algorithms that automatically exploit the dependence structure in such models [1, 3, 4]. These advances have brought on a resurgence of interest in computational decision systems based on normative theories of belief and preference. However, graphical decision-modeling languages are still quite limited for purposes of knowledge representation because, while they can describe the relationships among particular event instances, they cannot capture general knowledge about probabilistic relationships across classes of events. The inability to capture general knowledge is a serious impediment for those AI tasks in which the relevant factors of a decision problem cannot be enumerated in advance. A graphical decision model encodes a particular set of probabilistic dependencies, a predefined set of decision alternatives, and a specific mathematical form for a utility function. Given a properly specified model, there exist relatively efficient algorithms for calculating posterior probabilities and optimal decision policies. A range of similar cases may be handled by parametric variations of the original model. However, if the structure of dependencies, the set of available alternatives, or the form of utility function changes from situation to situation, then a fixed network representation is no longer adequate. An ideal computational decision system would possess general, broad knowledge of a domain, but would have the ability to reason about the particular circumstances of any given decision problem within the domain. One obvious approach|which we call call knowledge-based model construction (KBMC)|is to generate a decision model dynamically at run-time, based on the problem description and information received thus far. Model construction consists of selection, instantiation, and assembly of causal and associational relationships from a broad knowledge base of general relationships among domain concepts. For example, suppose we wish to develop a system to recommend appropriate actions for maintaining a computer network. The natural graphical decision model would include chance ",
+ "neighbors": [
+ 364,
+ 533,
+ 1209
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 661,
+ "label": 6,
+ "text": "Title: LEARNING CONCEPTS BY ASKING QUESTIONS \nAbstract: Tw o important issues in machine learning are explored: the role that memory plays in acquiring new concepts; and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept. ",
+ "neighbors": [
+ 519,
+ 523,
+ 593,
+ 624
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 662,
+ "label": 2,
+ "text": "Title: Distributed Representations and Nested Compositional Structure \nAbstract: Adaptation of ecological systems to their environments is commonly viewed through some explicit fitness function defined a priori by the experimenter, or measured a posteriori by estimations based on population size and/or reproductive rates. These methods do not capture the role of environmental complexity in shaping the selective pressures that control the adaptive process. Ecological simulations enabled by computational tools such as the Latent Energy Environments (LEE) model allow us to characterize more closely the effects of environmental complexity on the evolution of adaptive behaviors. LEE is described in this paper. Its motivation arises from the need to vary complexity in controlled and predictable ways, without assuming the relationship of these changes to the adaptive behaviors they engender. This goal is achieved through a careful characterization of environments in which different forms of \"energy\" are well-defined. A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process. Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments. We outline the results of three experiments that analyze different sources of environmental complexity and their effects on the collective behaviors of evolving populations. ",
+ "neighbors": [
+ 146,
+ 637,
+ 761,
+ 890,
+ 1183
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 663,
+ "label": 5,
+ "text": "Title: An Efficient Subsumption Algorithm for Inductive Logic Programming \nAbstract: In this paper we investigate the efficiency of - subsumption (` ), the basic provability relation in ILP. As D ` C is NP-complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate in several restrictions of D. We first adapt the notion of determinate clauses used in ILP and show that -subsumption is decidable in polynomial time if D is determinate with respect to C. Secondly, we adapt the notion of k-local Horn clauses and show that - subsumption is efficiently computable for some reasonably small k. We then show how these results can be combined, to give an efficient reasoning procedure for determinate k-local Horn clauses, an ILP-problem recently suggested to be polynomial predictable by Cohen (1993) by a simple counting argument. We finally outline how the -reduction algorithm, an essential part of every lgg ILP-learning algorithm, can be im proved by these ideas.",
+ "neighbors": [
+ 908
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 664,
+ "label": 1,
+ "text": "Title: Strongly Typed Genetic Programming \nAbstract: BBN Technical Report #7866: Abstract Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as \"closure\", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called \"strongly typed\" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate syntactically correct parse trees. Key concepts for STGP are generic functions, which are not true strongly typed functions but rather templates for classes of such functions, and generic data types, which are analogous. To illustrate STGP, we present four examples involving vector/matrix manipulation and list manipulation: (1) the multi-dimensional least-squares regression problem, (2) the multi-dimensional Kalman filter, (3) the list manipulation function NTH, and (4) the list manipulation function MAPCAR.",
+ "neighbors": [
+ 91,
+ 497,
+ 552,
+ 568,
+ 691,
+ 692,
+ 693,
+ 766,
+ 960
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 665,
+ "label": 5,
+ "text": "Title: Overcoming the myopia of inductive learning algorithms with RELIEFF \nAbstract: Current inductive machine learning algorithms typically use greedy search with limited looka-head. This prevents them to detect significant conditional dependencies between the attributes that describe training objects. Instead of myopic impurity functions and lookahead, we propose to use RELI-EFF, an extension of RELIEF developed by Kira and Rendell [10], [11], for heuristic guidance of inductive learning algorithms. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems and the results are compared with some other well known machine learning algorithms. Excellent results on artificial data sets and two real world problems show the advantage of the presented approach to inductive learning. ",
+ "neighbors": [
+ 576,
+ 612,
+ 875,
+ 882,
+ 936,
+ 953
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 666,
+ "label": 4,
+ "text": "Title: Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition \nAbstract: This paper describes the MAXQ method for hierarchical reinforcement learning based on a hierarchical decomposition of the value function and derives conditions under which the MAXQ decomposition can represent the optimal value function. We show that for certain execution models, the MAXQ decomposition will produce better policies than Feudal Q learning.",
+ "neighbors": [
+ 324,
+ 426
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 667,
+ "label": 1,
+ "text": "Title: Causality in Genetic Programming \nAbstract: Causality relates changes in the structure of an object with the effects of such changes, that is changes in the properties or behavior of the object. This paper analyzes the concept of causality in Genetic Programming (GP) and suggests how it can be used in adapting control parameters for speeding up GP search. We first analyze the effects of crossover to show the weak causality of the GP representation and operators. Hierarchical GP approaches based on the discovery and evolution of functions amplify this phenomenon. However, selection gradually retains strongly causal changes. Causality is correlated to search space exploitation and is discussed in the context of the exploration-exploitation tradeoff. The results described argue for a bottom-up GP evolutionary thesis. Finally, new developments based on the idea of GP architecture evolution (Koza, 1994a) are discussed from the causality perspective. ",
+ "neighbors": [
+ 68,
+ 75,
+ 454,
+ 490,
+ 501,
+ 766,
+ 978
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 668,
+ "label": 3,
+ "text": "Title: Rationality and Intelligence \nAbstract: The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper outlines a gradual evolution in the formal conception of rationality that brings it closer to our informal conception of intelligence and simultaneously reduces the gap between theory and practice. Some directions for future research are indicated.",
+ "neighbors": [
+ 346,
+ 711
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 669,
+ "label": 2,
+ "text": "Title: In Estimating analogical similarity by dot-products of Holographic Reduced Representations. \nAbstract: Models of analog retrieval require a computationally cheap method of estimating similarity between a probe and the candidates in a large pool of memory items. The vector dot-product operation would be ideal for this purpose if it were possible to encode complex structures as vector representations in such a way that the superficial similarity of vector representations reflected underlying structural similarity. This paper describes how such an encoding is provided by Holographic Reduced Representations (HRRs), which are a method for encoding nested relational structures as fixed-width distributed representations. The conditions under which structural similarity is reflected in the dot-product rankings of ",
+ "neighbors": [
+ 637,
+ 761
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 670,
+ "label": 2,
+ "text": "Title: Analysis of the Convergence and Generalization of AA1 \nAbstract: that is based on Angluin's L fl algorithm. The algorithm maintains a model consistent with its past examples. When a new counterexample arrives it tries to extend the model in a minimal fashion. We conducted a set of experiments where random automata that represent different strategies were generated, and the algorithm tried to learn them based on prefix-closed samples of their behavior. The algorithm managed to learn very compact models that agree with the samples. The size of the sample had a small effect on the size of the model. The experimental results suggest that for random prefix-closed samples the algorithm behaves well. However, following Angluin's result on the difficulty of learning almost uniform complete samples [ An-gluin, 1978 ] , it is obvious that our algorithm does not solve the complexity issue of inferring a DFA from a general prefix-closed sample. We are currently looking for classes of prefix-closed samples in which US-L* behaves well. [ Carmel and Markovitch, 1994 ] D. Carmel and S. Markovitch. The M* algorithm: Incorporating opponent models into adversary search. Technical Report CIS report 9402, Technion, March 1994. [ Carmel and Markovitch, 1995 ] D. Carmel and S. Markovitch. Unsupervised learning of finite automata: A practical approach. Technical Report CIS report 9504, Technion, March 1995. [ Shoham and Tennenholtz, 1994 ] Y. Shoham and M. Tennenholtz. Co-Learning and the evolution of social activity. Technical Report STAN-CS-TR-94-1511, Stanford Univrsity, Department of Computer Science, 1994. ",
+ "neighbors": [
+ 470,
+ 641,
+ 738
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 671,
+ "label": 6,
+ "text": "Title: Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms \nAbstract: The term \"bias\" is widely used|and with different meanings|in the fields of machine learning and statistics. This paper clarifies the uses of this term and shows how to measure and visualize the statistical bias and variance of learning algorithms. Statistical bias and variance can be applied to diagnose problems with machine learning bias, and the paper shows four examples of this. Finally, the paper discusses methods of reducing bias and variance. Methods based on voting can reduce variance, and the paper compares Breiman's bagging method and our own tree randomization method for voting decision trees. Both methods uniformly improve performance on data sets from the Irvine repository. Tree randomization yields perfect performance on the Letter Recognition task. A weighted nearest neighbor algorithm based on the infinite bootstrap is also introduced. In general, decision tree algorithms have moderate-to-high variance, so an important implication of this work is that variance|rather than appropriate or inappropriate machine learning bias|is an important cause of poor performance for decision tree algorithms. ",
+ "neighbors": [
+ 601,
+ 724,
+ 1245
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 672,
+ "label": 0,
+ "text": "Title: An Explanation-Based Approach to Improve Retrieval in Case-Based Planning \nAbstract: When a case-based planner is retrieving a previous case in preparation for solving a new similar problem, it is often not aware of the implicit features of the new problem situation which determine if a particular case may be successfully applied. This means that some cases may be retrieved in error in that the case may fail to improve the planner's performance. Retrieval may be incrementally improved by detecting and explaining these failures as they occur. In this paper we provide a definition of case failure for the planner, dersnlp (derivation replay in snlp), which solves new problems by replaying its previous plan derivations. We provide EBL (explanation-based learning) techniques for detecting and constructing the reasons for the failure. We also describe how to organize a case library so as to incorporate this failure information as it is produced. Finally we present an empirical study which demonstrates the effectiveness of this approach in improving the performance of dersnlp.",
+ "neighbors": [
+ 348,
+ 636,
+ 906
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 673,
+ "label": 2,
+ "text": "Title: Statistical Evaluation of Neural Network Experiments: Minimum Requirements and Current Practice \nAbstract: ",
+ "neighbors": [
+ 674,
+ 739
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 674,
+ "label": 2,
+ "text": "Title: A Quantitative Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice \nAbstract: 190 articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. 29% of them employ not even a single realistic or real learning problem. Only 8% of the articles present results for more than one problem using real world data. Furthermore, one third of all articles do not present any quantitative comparison with a previously known algorithm. These results suggest that we should strive for better assessment practices in neural network learning algorithm research. For the long-term benefit of the field, the publication standards should be raised in this respect and easily accessible collections of benchmark problems should be built. ",
+ "neighbors": [
+ 312,
+ 453,
+ 510,
+ 635,
+ 673,
+ 789
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 675,
+ "label": 1,
+ "text": "Title: The Role of Development in Genetic Algorithms \nAbstract: Technical Report Number CS94-394 Computer Science and Engineering, U.C.S.D. Abstract The developmental mechanisms transforming genotypic to phenotypic forms are typically omitted in formulations of genetic algorithms (GAs) in which these two representational spaces are identical. We argue that a careful analysis of developmental mechanisms is useful when understanding the success of several standard GA techniques, and can clarify the relationships between more recently proposed enhancements. We provide a framework which distinguishes between two developmental mechanisms | learning and maturation | while also showing several common effects on GA search. This framework is used to analyze how maturation and local search can change the dynamics of the GA. We observe that in some contexts, maturation and local search can be incorporated into the fitness evaluation, but illustrate reasons for considering them seperately. Further, we identify contexts in which maturation and local search can be distinguished from the fitness evaluation. ",
+ "neighbors": [
+ 70,
+ 308,
+ 652,
+ 1325
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 676,
+ "label": 1,
+ "text": "Title: The Role of Development in Genetic Algorithms \nAbstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993 ",
+ "neighbors": [
+ 91,
+ 462,
+ 579,
+ 652
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 677,
+ "label": 1,
+ "text": "Title: Data Analyses Using Simulated Breeding and Inductive Learning Methods \nAbstract: Marketing decision making tasks require the acquisition of efficient decision rules from noisy questionnaire data. Unlike popular learning-from-example methods, in such tasks, we must interpret the characteristics of the data without clear features of the data nor pre-determined evaluation criteria. The problem is how domain experts get simple, easy-to-understand, and accurate knowledge from noisy data. This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to get the effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. The simulated breeding is one of the Genetic Algorithm based techniques to subjectively or interactively evaluate the qualities of offspring generated by genetic operations. The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data: the acquired rules are simpler than the results from the direct application of inductive learning; a domain expert admits that they are easy to understand; and they are at the same level on the accuracy compared with the other methods. ",
+ "neighbors": [
+ 91,
+ 217,
+ 242,
+ 522,
+ 745
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 678,
+ "label": 0,
+ "text": "Title: Structural Similarity and Adaptation \nAbstract: Most commonly, case-based reasoning is applied in domains where attribute value representations of cases are sufficient to represent the features relevant to support classification, diagnosis or design tasks. Distance functions like the Hamming-distance or their transformation into similarity functions are applied to retrieve past cases to be used to generate the solution of an actual problem. Often, domain knowledge is available to adapt past solutions to new problems or to evaluate solutions. However, there are domains like architectural design or law in which structural case representations and corresponding structural similarity functions are needed. Often, the acquisition of adaptation knowledge seems to be impossible or rather requires an effort that is not manageable for fielded applications. Despite of this, humans use cases as the main source to generate adapted solutions. How to achieve this computationally? This paper presents a general approach to structural similarity assessment and adaptation. The approach allows to explore structural case representations and limited domain knowledge to support design tasks. It is exemplarily instantiated in three modules of the design assistant FABEL-Idea that generates adapted design solutions on the basis of prior CAD layouts.",
+ "neighbors": [
+ 309,
+ 512,
+ 806
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 679,
+ "label": 0,
+ "text": "Title: Acquiring Case Adaptation Knowledge: A Hybrid Approach \nAbstract: The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance. ",
+ "neighbors": [
+ 337,
+ 474,
+ 475,
+ 476,
+ 639,
+ 833
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 680,
+ "label": 0,
+ "text": "Title: Learning Problem-Solving Concepts by Reflecting on Problem Solving \nAbstract: Learning and problem solving are intimately related: problem solving determines the knowledge requirements of the reasoner which learning must fulfill, and learning enables improved problem-solving performance. Different models of problem solving, however, recognize different knowledge needs, and, as a result, set up different learning tasks. Some recent models analyze problem solving in terms of generic tasks, methods, and subtasks. These models require the learning of problem-solving concepts such as new tasks and new task decompositions. We view reflection as a core process for learning these problem-solving concepts. In this paper, we identify the learning issues raised by the task-structure framework of problem solving. We view the problem solver as an abstract device, and represent how it works in terms of a structure-behavior-function model which specifies how the knowledge and reasoning of the problem solver results in the accomplishment of its tasks. We describe how this model enables reflection, and how model-based reflection enables the reasoner to adapt its task structure to produce solutions of better quality. The Autognostic system illustrates this reflection process. ",
+ "neighbors": [
+ 300,
+ 340
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 681,
+ "label": 0,
+ "text": "Title: Supporting Combined Human and Machine Planning: An Interface for Planning by Analogical Reasoning \nAbstract: Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information.",
+ "neighbors": [
+ 337,
+ 475,
+ 476,
+ 478,
+ 479,
+ 942
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 682,
+ "label": 1,
+ "text": "Title: Evolutionary Programming and Evolution Strategies: Similarities and Differences \nAbstract: Evolutionary Programming and Evolution Strategies, rather similar representatives of a class of probabilistic optimization algorithms gleaned from the model of organic evolution, are discussed and compared to each other with respect to similarities and differences of their basic components as well as their performance in some experimental runs. Theoretical results on global convergence, step size control for a strictly convex, quadratic function and an extension of the convergence rate theory for Evolution Strategies are presented and discussed with respect to their implications on Evolutionary Programming. ",
+ "neighbors": [
+ 729,
+ 876,
+ 952
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 683,
+ "label": 1,
+ "text": "Title: Genetic algorithms with multi-parent recombination \nAbstract: In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better. ",
+ "neighbors": [
+ 78,
+ 91,
+ 415,
+ 482,
+ 729,
+ 794,
+ 844,
+ 851,
+ 876
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 684,
+ "label": 2,
+ "text": "Title: A Method of Combining Multiple Probabilistic Classifiers through Soft Competition on Different Feature Sets \nAbstract: A novel method is proposed for combining multiple probabilistic classifiers on different feature sets. In order to achieve the improved classification performance, a generalized finite mixture model is proposed as a linear combination scheme and implemented based on radial basis function networks. In the linear combination scheme, soft competition on different feature sets is adopted as an automatic feature rank mechanism so that different feature sets can be always simultaneously used in an optimal way to determine linear combination weights. For training the linear combination scheme, a learning algorithm is developed based on Expectation-Maximization (EM) algorithm. The proposed method has been applied to a typical real world problem, viz. speaker identification, in which different feature sets often need consideration simultaneously for robustness. Simulation results show that the proposed method yields good performance in speaker identification.",
+ "neighbors": [
+ 40,
+ 826,
+ 900,
+ 905
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 685,
+ "label": 2,
+ "text": "Title: Towards a General Distributed Platform for Learning and Generalization and Word Perfect Corp. 1 Introduction\nAbstract: Different learning models employ different styles of generalization on novel inputs. This paper proposes the need for multiple styles of generalization to support a broad application base. The Priority ASOCS model (Priority Adaptive Self-Organizing Concurrent System) is overviewed and presented as a potential platform which can support multiple generalization styles. PASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. The PASOCS can operate in either a data processing mode or a learning mode. During data processing mode, the system acts as a parallel hardware circuit. During learning mode, the PASOCS incorporates rules, with attached priorities, which represent the application being learned. Learning is accomplished in a distributed fashion in time logarithmic in the number of rules. The new model has significant learning time and space complexity improvements over previous models. Generalization in a learning system is at best always a guess. The proper style of generalization is application dependent. Thus, one style of generalization may not be sufficient to allow a learning system to support a broad spectrum of applications [14]. Current connectionist models use one specific style of generalization which is implicit in the learning algorithm. We suggest that the type of generalization used be a self-organizing parameter of the learning system which can be discovered as learning takes place. This requires a) a model which allows flexible generalization styles, and b) mechanisms to guide the system into the best style of generalization for the problem being learned. This paper overviews a learning model which seeks to efficiently support requirement a) above. The model is called Priority ASOCS (PASOCS) [9], which is a member of a class of models called ASOCS (Adaptive Self-Organizing Concurrent Systems) [5]. Section 2 of this paper gives an example of how different generalization techniques can approach a problem. Section 3 presents an overview of PASOCS. Section 4 illustrates how flexible generalization can be supported. Section 5 concludes the paper. ",
+ "neighbors": [
+ 470,
+ 641,
+ 738
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 686,
+ "label": 6,
+ "text": "Title: A New Metric-Based Approach to Model Selection \nAbstract: We introduce a new approach to model selection that performs better than the standard complexity-penalization and hold-out error estimation techniques in many cases. The basic idea is to exploit the intrinsic metric structure of a hypothesis space, as determined by the natural distribution of unlabeled training patterns, and use this metric as a reference to detect whether the empirical error estimates derived from a small (labeled) training sample can be trusted in the region around an empirically optimal hypothesis. Using simple metric intuitions we develop new geometric strategies for detecting overfitting and performing robust yet responsive model selection in spaces of candidate functions. These new metric-based strategies dramatically outperform previous approaches in experimental studies of classical polynomial curve fitting. Moreover, the technique is simple, efficient, and can be applied to most function learning tasks. The only requirement is access to an auxiliary collection of unlabeled training data. ",
+ "neighbors": [
+ 493,
+ 747,
+ 792,
+ 899
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 687,
+ "label": 1,
+ "text": "Title: Using Real-Valued Genetic Algorithms to Evolve Rule Sets for Classification \nAbstract: In this paper, we use a genetic algorithm to evolve a set of classification rules with real-valued attributes. We show how real-valued attribute ranges can be encoded with real-valued genes and present a new uniform method for representing don't cares in the rules. We view supervised classification as an optimization problem, and evolve rule sets that maximize the number of correct classifications of input instances. We use a variant of the Pitt approach to genetic-based machine learning system with a novel conflict resolution mechanism between competing rules within the same rule set. Experimental results demonstrate the effectiveness of our proposed approach on a benchmark wine classifier system. ",
+ "neighbors": [
+ 78,
+ 91,
+ 745
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 688,
+ "label": 1,
+ "text": "Title: Knowledge-Based Genetic Learning \nAbstract: Genetic algorithms have been proven to be a powerful tool within the area of machine learning. However, there are some classes of problems where they seem to be scarcely applicable, e.g. when the solution to a given problem consists of several parts that influence each other. In that case the classic genetic operators cross-over and mutation do not work very well thus preventing a good performance. This paper describes an approach to overcome this problem by using high-level genetic operators and integrating task specific but domain independent knowledge to guide the use of these operators. The advantages of this approach are shown for learning a rule base to adapt the parameters of an image processing operator path within the SOLUTION system.",
+ "neighbors": [
+ 91,
+ 634,
+ 745
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 689,
+ "label": 4,
+ "text": "Title: Team-Partitioned, Opaque-Transition Reinforcement Learning \nAbstract: In this paper, we present a novel multi-agent learning paradigm called team-partitioned, opaque-transition reinforcement learning (TPOT-RL). TPOT-RL introduces the concept of using action-dependent features to generalize the state space. In our work, we use a learned action-dependent feature space. TPOT-RL is an effective technique to allow a team of agents to learn to cooperate towards the achievement of a specific goal. It is an adaptation of traditional RL methods that is applicable in complex, non-Markovian, multi-agent domains with large state spaces and limited training opportunities. Multi-agent scenarios are opaque-transition, as team members are not always in full communication with one another and adversaries may affect the environment. Hence, each learner cannot rely on having knowledge of future state transitions after acting in the world. TPOT-RL enables teams of agents to learn effective policies with very few training examples even in the face of a large state space with large amounts of hidden state. The main responsible features are: dividing the learning task among team members, using a very coarse, action-dependent feature space, and allowing agents to gather reinforcement directly from observation of the environment. TPOT-RL is fully implemented and has been tested in the robotic soccer domain, a complex, multi-agent framework. This paper presents the algorithmic details of TPOT-RL as well as empirical results demonstrating the effectiveness of the developed multi-agent learning approach with learned features.",
+ "neighbors": [
+ 920,
+ 939
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 690,
+ "label": 2,
+ "text": "Title: Using Multiple Node Types to Improve the Performance of DMP (Dynamic Multilayer Perceptron) \nAbstract: This paper discusses a method for training multilayer perceptron networks called DMP2 (Dynamic Multilayer Perceptron 2). The method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The focus of this paper is on the effects of using multiple node types within the DMP framework. Simulation results show that DMP2 performs favorably in comparison with other learning algorithms, and that using multiple node types can be beneficial to network performance. ",
+ "neighbors": [
+ 470,
+ 903
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 691,
+ "label": 1,
+ "text": "Title: Entailment for Specification Refinement \nAbstract: Specification refinement is part of formal program derivation, a method by which software is directly constructed from a provably correct specification. Because program derivation is an intensive manual exercise used for critical software systems, an automated approach would allow it to be viable for many other types of software systems. The goal of this research is to determine if genetic programming (GP) can be used to automate the specification refinement process. The initial steps toward this goal are to show that a well-known proof logic for program derivation can be encoded such that a GP-based system can infer sentences in the logic for proof of a particular sentence. The results are promising and indicate that GP can be useful in aiding pro gram derivation.",
+ "neighbors": [
+ 568,
+ 664,
+ 692,
+ 1265,
+ 1311
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 692,
+ "label": 1,
+ "text": "Title: Type Inheritance in Strongly Typed Genetic Programming \nAbstract: This paper appears as chapter 18 of Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors Advances in Genetic Programming 2, MIT Press, 1996. Abstract Genetic Programming (GP) is an automatic method for generating computer programs, which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has, as a basic premise, the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy, which allows more than two levels of typing. ",
+ "neighbors": [
+ 497,
+ 552,
+ 568,
+ 664,
+ 691,
+ 693
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 693,
+ "label": 1,
+ "text": "Title: Augmenting Collective Adaptation with Simple Process Agents \nAbstract: We have integrated the distributed search of genetic programming based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. However, there is still considerable scope for improvement. In collective adaptation, search agents gather knowledge of their environment and deposit it in a central information repository. Process agents are then able to manipulate that focused knowledge, exploiting the exploration of the search agents. We examine the utility of increasing the capabilities of the centralized process agents. ",
+ "neighbors": [
+ 568,
+ 664,
+ 692,
+ 1311
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 694,
+ "label": 5,
+ "text": "Title: Concept Learning and the Problem of Small \nAbstract: ",
+ "neighbors": [
+ 460,
+ 716,
+ 840
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 695,
+ "label": 6,
+ "text": "Title: Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction \nAbstract: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.",
+ "neighbors": [
+ 219
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 696,
+ "label": 6,
+ "text": "Title: An Empirical Evaluation of Bagging and Boosting \nAbstract: An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods using both neural networks and decision trees as our classification algorithms. Our results clearly show two important facts. The first is that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method. The second is that Boosting is a powerful technique that can usually produce better ensembles than Bagging; however, it is more susceptible to noise and can quickly overfit a data set. ",
+ "neighbors": [
+ 792,
+ 809,
+ 826
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 697,
+ "label": 2,
+ "text": "Title: Hidden Markov Modeling of simultaneously recorded cells in the Associative cortex of behaving monkeys \nAbstract: A widely held idea regarding information processing in the brain is the cell-assembly hypothesis suggested by Hebb in 1949. According to this hypothesis, the basic unit of information processing in the brain is an assembly of cells, which can act briefly as a closed system, in response to a specific stimulus. This work presents a novel method of characterizing this supposed activity using a Hidden Markov Model. This model is able to reveal some of the underlying cortical network activity of behavioral processes. In our study the process in hand was the simultaneous activity of several cells recorded from the frontal cortex of behaving monkeys. Using such a model we were able to identify the behavioral mode of the animal and directly identify the corresponding collective network activity. Furthermore, the segmentation of the data into the discrete states also provides direct evidence for the state dependency of the short-time correlation functions between the same pair of cells. Thus, this cross-correlation depends on the network state of activity and not on local connectivity alone.",
+ "neighbors": [
+ 781
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 698,
+ "label": 3,
+ "text": "Title: Model Selection and Accounting for Model Uncertainty in Linear Regression Models \nAbstract: 1 Adrian E. Raftery is Professor of Statistics and Sociology, David Madigan is Assistant Professor of Statistics, and Jennifer Hoeting is a Ph.D. Candidate, all at the Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. The research of Raftery and Hoeting was supported by ONR Contract N-00014-91-J-1074. Madigan's research was partially supported by NSF grant no. DMS 92111627. The authors are grateful to Danika Lew for research assistance. ",
+ "neighbors": [
+ 47,
+ 448,
+ 488,
+ 530,
+ 570,
+ 618,
+ 648,
+ 757,
+ 850
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 699,
+ "label": 2,
+ "text": "Title: Categorical Perception in Facial Emotion Classification \nAbstract: We present an automated emotion recognition system that is capable of identifying six basic emotions (happy, surprise, sad, angry, fear, disgust) in novel face images. An ensemble of simple feed-forward neural networks are used to rate each of the images. The outputs of these networks are then combined to generate a score for each emotion. The networks were trained on a database of face images that human subjects consistently rated as portraying a single emotion. Such a system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from the same database. The neural network model exhibits categorical perception between some emotion pairs. A linear sequence of morph images is created between two expressions of an individual's face and this sequence is analyzed by the model. Sharp transitions in the output response vector occur in a single step in the sequence for some emotion pairs and not for others. We plan to us the model's response to limit and direct testing in determining if human subjects exhibit categorical perception in morph image sequences. ",
+ "neighbors": [
+ 544
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 700,
+ "label": 2,
+ "text": "Title: BLIND SEPARATION OF REAL WORLD AUDIO SIGNALS USING OVERDETERMINED MIXTURES \nAbstract: We discuss the advantages of using overdetermined mixtures to improve upon blind source separation algorithms that are designed to extract sound sources from acoustic mixtures. A study of the nature of room impulse responses helps us choose an adaptive filter architecture. We use ideal inverses of acquired room impulse responses to compare the effectiveness of different-sized separating filter configurations of various filter lengths. Using a multi-channel blind least-mean-square algorithm (MBLMS), we show that, by adding additional sensors, we can improve upon the separation of signals mixed with real world filters. ",
+ "neighbors": [
+ 331,
+ 848
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 701,
+ "label": 5,
+ "text": "Title: Producing More Comprehensible Models While Retaining Their Performance \nAbstract: Rissanen's Minimum Description Length (MDL) principle is adapted to handle continuous attributes in the Inductive Logic Programming setting. Application of the developed coding as a MDL pruning mechanism is devised. The behavior of the MDL pruning is tested in a synthetic domain with artificially added noise of different levels and in two real life problems | modelling of the surface roughness of a grinding workpiece and modelling of the mutagenicity of nitroaromatic compounds. Results indicate that MDL pruning is a successful parameter-free noise fighting tool in real-life domains since it acts as a safeguard against building too complex models while retaining the accuracy of the model. ",
+ "neighbors": [
+ 183,
+ 198,
+ 200,
+ 604
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 702,
+ "label": 1,
+ "text": "Title: Evolution of Non-Deterministic Incremental Algorithms as a New Approach for Search in State Spaces \nAbstract: Let us call a non-deterministic incremental algorithm one that is able to construct any solution to a combinatorial problem by selecting incrementally an ordered sequence of choices that defines this solution, each choice being made non-deterministically. In that case, the state space can be represented as a tree, and a solution is a path from the root of that tree to a leaf. This paper describes how the simulated evolution of a population of such non-deterministic incremental algorithms offers a new approach for the exploration of a state space, compared to other techniques like Genetic Algorithms (GA), Evolutionary Strategies (ES) or Hill Climbing. In particular, the efficiency of this method, implemented as the Evolving Non-Determinism (END) model, is presented for the sorting network problem, a reference problem that has challenged computer science. Then, we shall show that the END model remedies some drawbacks of these optimization techniques and even outperforms them for this problem. Indeed, some 16-input sorting networks as good as the best known have been built from scratch, and even a 25-year-old result for the 13-input problem has been improved by one comparator.",
+ "neighbors": [
+ 462,
+ 821,
+ 822,
+ 955,
+ 958
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 703,
+ "label": 2,
+ "text": "Title: Constructive Training Methods for Feedforward Neural Networks with Binary Weights \nAbstract: DIMACS Technical Report 95-35 August 1995 ",
+ "neighbors": [
+ 827
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 704,
+ "label": 4,
+ "text": "Title: USING A GENETIC ALGORITHM TO LEARN BEHAVIORS FOR AUTONOMOUS VEHICLES \nAbstract: Truly autonomous vehicles will require both projec - tive planning and reactive components in order to perform robustly. Projective components are needed for long-term planning and replanning where explicit reasoning about future states is required. Reactive components allow the system to always have some action available in real-time, and themselves can exhibit robust behavior, but lack the ability to expli - citly reason about future states over a long time period. This work addresses the problem of creating reactive components for autonomous vehicles. Creating reactive behaviors (stimulus-response rules) is generally difficult, requiring the acquisition of much knowledge from domain experts, a problem referred to as the knowledge acquisition bottleneck. SAMUEL is a system that learns reactive behaviors for autonomous agents. SAMUEL learns these behaviors under simulation, automating the process of creating stimulus-response rules and therefore reducing the bottleneck. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. Current work is investigating how well behaviors learned under simulation environments work in real world environments. In this paper, we describe SAMUEL, and describe behaviors that have been learned for simulated autonomous aircraft, autonomous underwater vehicles, and robots. These behaviors include dog fighting, missile evasion, track - ing, navigation, and obstacle avoidance. ",
+ "neighbors": [
+ 91,
+ 529,
+ 554,
+ 642,
+ 734
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 705,
+ "label": 2,
+ "text": "Title: BACKPROPAGATION SEPARATES WHERE PERCEPTRONS DO \nAbstract: Feedforward nets with sigmoidal activation functions are often designed by minimizing a cost criterion. It has been pointed out before that this technique may be outperformed by the classical perceptron learning rule, at least on some problems. In this paper, we show that no such pathologies can arise if the error criterion is of a threshold LMS type, i.e., is zero for values \"beyond\" the desired target values. More precisely, we show that if the data are linearly separable, and one considers nets with no hidden neurons, then an error function as above cannot have any local minima that are not global. Simulations of networks with hidden units are consistent with these results, in that often data which can be classified when minimizing a threshold LMS criterion may fail to be classified when using instead a simple LMS cost. In addition, the proof gives the following stronger result, under the stated hypotheses: the continuous gradient adjustment procedure is such that from any initial weight configuration a separating set of weights is obtained in finite time. This is a precise analogue of the Perceptron Learning Theorem. The results are then compared with the more classical pattern recognition problem of threshold LMS with linear activations, where no spurious local minima exist even for nonseparable data: here it is shown that even if using the threshold criterion, such bad local minima may occur, if the data are not separable and sigmoids are used. ",
+ "neighbors": [
+ 539,
+ 815
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 706,
+ "label": 3,
+ "text": "Title: Modelling Risk from a Disease in Time and Space \nAbstract: This paper combines existing models for longitudinal and spatial data in a hierarchical Bayesian framework, with particular emphasis on the role of time- and space-varying covariate effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by a tentative re-analysis of Ohio lung cancer data 1968-88. Two approaches that adjust for unmeasured spatial covariates, particularly tobacco consumption, are described. The first includes random effects in the model to account for unobserved heterogeneity; the second adds a simple urbanization measure as a surrogate for smoking behaviour. The Ohio dataset has been of particular interest because of the suggestion that a nuclear facility in the southwest of the state may have caused increased levels of lung cancer there. However, we contend here that the data are inadequate for a proper investigation of this issue. fl Email: leo@stat.uni-muenchen.de",
+ "neighbors": [
+ 55,
+ 206
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 707,
+ "label": 1,
+ "text": "Title: The Schema Theorem and Price's Theorem \nAbstract: Holland's Schema Theorem is widely taken to be the foundation for explanations of the power of genetic algorithms (GAs). Yet some dissent has been expressed as to its implications. Here, dissenting arguments are reviewed and elaborated upon, explaining why the Schema Theorem has no implications for how well a GA is performing. Interpretations of the Schema Theorem have implicitly assumed that a correlation exists between parent and offspring fitnesses, and this assumption is made explicit in results based on Price's Covariance and Selection Theorem. Schemata do not play a part in the performance theorems derived for representations and operators in general. However, schemata re-emerge when recombination operators are used. Using Geiringer's recombination distribution representation of recombination operators, a \"missing\" schema theorem is derived which makes explicit the intuition for when a GA should perform well. Finally, the method of \"adaptive landscape\" analysis is examined and counterexamples offered to the commonly used correlation statistic. Instead, an alternative statistic | the transmission function in the fitness domain | is proposed as the optimal statistic for estimating GA performance from limited samples.",
+ "neighbors": [
+ 91,
+ 218,
+ 652,
+ 952,
+ 1014,
+ 1105,
+ 1145,
+ 1177
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 708,
+ "label": 5,
+ "text": "Title: Finding Accurate Frontiers: A Knowledge-Intensive Approach to Relational Learning \nAbstract: An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory. ",
+ "neighbors": [
+ 52,
+ 72,
+ 299,
+ 519,
+ 615,
+ 616,
+ 1159,
+ 1199
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 709,
+ "label": 1,
+ "text": "Title: EVOLVING NEURAL NETWORKS WITH COLLABORATIVE SPECIES \nAbstract: We present a coevolutionary architecture for solving decomposable problems and apply it to the evolution of artificial neural networks. Although this work is preliminary in nature it has a number of advantages over non-coevolutionary approaches. The coevolutionary approach utilizes a divide-and-conquer technique in which species representing simpler subtasks are evolved in separate instances of a genetic algorithm executing in parallel. Collaborations among the species are formed representing complete solutions. Species are created dynamically as needed. Results are presented in which the coevolutionary architecture produces higher quality solutions in fewer evolutionary trials when compared with an alternative non-coevolutionary approach on the problem of evolving cascade networks for parity computation. ",
+ "neighbors": [
+ 140,
+ 632,
+ 634,
+ 1107
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 710,
+ "label": 2,
+ "text": "Title: A Hypothesis-driven Constructive Induction Approach to Expanding Neural Networks \nAbstract: With most machine learning methods, if the given knowledge representation space is inadequate then the learning process will fail. This is also true with methods using neural networks as the form of the representation space. To overcome this limitation, an automatic construction method for a neural network is proposed. This paper describes the BP-HCI method for a hypothesis-driven constructive induction in a neural network trained by the backpropagation algorithm. The method searches for a better representation space by analyzing the hypotheses generated in each step of an iterative learning process. The method was applied to ten problems, which include, in particular, exclusive-or, MONK2, parity-6BIT and inverse parity-6BIT problems. All problems were successfully solved with the same initial set of parameters; the extension of representation space was no more than necessary extension for each problem.",
+ "neighbors": [
+ 485,
+ 881
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 711,
+ "label": 3,
+ "text": "Title: The BATmobile: Towards a Bayesian Automated Taxi \nAbstract: The problem of driving an autonomous vehicle in highway traffic engages many areas of AI research and has substantial economic significance. We describe work in progress on a new approach to this problem based on a decision-theoretic architecture using dynamic probabilistic networks. The architecture provides a sound solution to the problems of sensor noise, sensor failure, and uncertainty about the behavior of other vehicles and about the effects of one's own actions. Our approach has been implemented in a computer simulation system, and the autonomous vehicle successfully negotiates a variety of difficult situations.",
+ "neighbors": [
+ 458,
+ 560,
+ 668,
+ 791,
+ 1123,
+ 1243
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 712,
+ "label": 6,
+ "text": "Title: Automatic Parameter Selection by Minimizing Estimated Error \nAbstract: We address the problem of finding the parameter settings that will result in optimal performance of a given learning algorithm using a particular dataset as training data. We describe a \"wrapper\" method, considering determination of the best parameters as a discrete function optimization problem. The method uses best-first search and cross-validation to wrap around the basic induction algorithm: the search explores the space of parameter values, running the basic algorithm many times on training and holdout sets produced by cross-validation to get an estimate of the expected error of each parameter setting. Thus, the final selected parameter settings are tuned for the specific induction algorithm and dataset being studied. We report experiments with this method on 33 datasets selected from the UCI and StatLog collections using C4.5 as the basic induction algorithm. At a 90% confidence level, our method improves the performance of C4.5 on nine domains, degrades performance on one, and is statistically indistinguishable from C4.5 on the rest. On the sample of datasets used for comparison, our method yields an average 13% relative decrease in error rate. We expect to see similar performance improvements when using our method with other machine learning al gorithms.",
+ "neighbors": [
+ 118,
+ 133,
+ 242,
+ 367,
+ 747,
+ 1210
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 713,
+ "label": 2,
+ "text": "Title: Input-Output Analysis of Feedback Loops with Saturation Nonlinearities \nAbstract: The Feature Vector Editor offers a user-extensible environment for exploratory data analysis. Several empirical studies have applied this environment to the SHER-FACS International Conflict Management dataset. Current analysis techniques include boolean analysis, temporal analysis, and automatic rule learning. Implemented portably in ANSI Common Lisp and the Common Lisp Interface Manager (CLIM), the system features an advanced interface that makes it intuitive for people to manipulate data and discover significant relationships. The system encapsulates data within objects and defines generic protocols that mediate all interactions between data, users and analysis algorithms. Generic data protocols make possible rapid integration of new datasets and new analytical algorithms with heterogeneous data formats. More sophisticated research reformulates SHERFACS conflict codings as machine-parsable narratives suitable for processing into semantic representations by the RELATUS Natural Language System. Experiments with 244 SHERFACS cases demonstrated the feasibility of building knowledge bases from synthetic texts exceeding 600 pages. ",
+ "neighbors": [
+ 719,
+ 756,
+ 805,
+ 896
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 714,
+ "label": 6,
+ "text": "Title: The Sources of Increased Accuracy for Two Proposed Boosting Algorithms \nAbstract: We introduce two boosting algorithms that aim to increase the generalization accuracy of a given classifier by incorporating it as a level-0 component in a stacked generalizer. Both algorithms construct a complementary level-0 classifier that can only generate coarse hypotheses for the training data. We show that the two algorithms boost generalization accuracy on a representative collection of data sets. The two algorithms are distinguished in that one of them modifies the class targets of selected training instances in order to train the complementary classifier. We show that the two algorithms achieve approximately equal generalization accuracy, but that they create complementary classifiers that display different degrees of accuracy and diversity. Our study provides evidence that it may be useful to investigate families of boosting algorithms that incorporate varying levels of accuracy and diversity, so as to achieve an appropriate mix for a given task and domain. ",
+ "neighbors": [
+ 330,
+ 792
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 715,
+ "label": 1,
+ "text": "Title: Surgery \nAbstract: Object localization has applications in many areas of engineering and science. The goal is to spatially locate an arbitrarily-shaped object. In many applications, it is desirable to minimize the number of measurements collected for this purpose, while ensuring sufficient localization accuracy. In surgery, for example, collecting a large number of localization measurements may either extend the time required to perform a surgical procedure, or increase the radiation dosage to which a patient is exposed. Localization accuracy is a function of the spatial distribution of discrete measurements over an object when measurement noise is present. In [Simon et al., 1995a], metrics were presented to evaluate the information available from a set of discrete object measurements. In this study, new approaches to the discrete point data selection problem are described. These include hillclimbing, genetic algorithms (GAs), and Population-Based Incremental Learning (PBIL). Extensions of the standard GA and PBIL methods, which employ multiple parallel populations, are explored. The results of extensive empirical testing are provided. The results suggest that a combination of PBIL and hillclimbing result in the best overall performance. A computer-assisted surgical system which incorporates some of the methods presented in this paper is currently being evaluated in cadaver trials. Evolution-Based Methods for Selecting Point Data Shumeet Baluja was supported by a National Science Foundation Graduate Student Fellowship and a Graduate Student Fellowship from the National Aeronautics and Space Administration, administered by the Lyndon B. Johnson Space Center, Houston, TX. David Simon was partially supported by a National Science Foundation National Challenge grant (award IRI-9422734). for Object Localization: Applications to",
+ "neighbors": [
+ 91,
+ 197,
+ 240,
+ 731,
+ 732
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 716,
+ "label": 5,
+ "text": "Title: Fossil: A Robust Relational Learner \nAbstract: The research reported in this paper describes Fossil, an ILP system that uses a search heuristic based on statistical correlation. This algorithm implements a new method for learning useful concepts in the presence of noise. In contrast to Foil's stopping criterion, which allows theories to grow in complexity as the size of the training sets increases, we propose a new stopping criterion that is independent of the number of training examples. Instead, Fossil's stopping criterion depends on a search heuristic that estimates the utility of literals on a uniform scale. In addition we outline how this feature can be used for top-down pruning and present some preliminary results. ",
+ "neighbors": [
+ 198,
+ 217,
+ 239,
+ 342,
+ 694,
+ 1189,
+ 1190,
+ 1320
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 717,
+ "label": 1,
+ "text": "Title: Evolution of Pseudo-colouring Algorithms for Image Enhancement with Interactive Genetic Programming \nAbstract: Technical Report: CSRP-97-5 School of Computer Science The University of Birmingham Abstract In this paper we present an approach to the interactive development of programs for image enhancement with Genetic Programming (GP) based on pseudo-colour transformations. In our approach the user drives GP by deciding which individual should be the winner in tournament selection. The presence of the user does not only allow running GP without a fitness function but it also transforms GP into a very efficient search procedure capable of producing effective solutions to real-life problems in only hundreds of evaluations. In the paper we also propose a strategy to further reduce user interaction: we record the choices made by the user in interactive runs and we later use them to build a model which can replace him/her in longer runs. Experimental results with interactive GP and with our user-modelling strategy are also reported.",
+ "neighbors": [
+ 91,
+ 853,
+ 1185,
+ 1265
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 718,
+ "label": 0,
+ "text": "Title: A Functional Theory of Creative Reading \nAbstract: Reading is an area of human cognition which has been studied for decades by psychologists, education researchers, and artificial intelligence researchers. Yet, there still does not exist a theory which accurately describes the complete process. We believe that these past attempts fell short due to an incomplete understanding of the overall task of reading; namely, the complete set of mental tasks a reasoner must perform to read and the mechanisms that carry out these tasks. We present a functional theory of the reading process and argue that it represents a coverage of the task. The theory combines experimental results from psychology, artificial intelligence, education, and linguistics, along with the insights we have gained from our own research. This greater understanding of the mental tasks necessary for reading will enable new natural language understanding systems to be more flexible and more capable than earlier ones. Furthermore, we argue that creativity is a necessary component of the reading process and must be considered in any theory or system attempting to describe it. We present a functional theory of creative reading and a novel knowledge organization scheme that supports the creativity mechanisms. The reading theory is currently being implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a computer system which reads science fiction stories. fl This paper is part of the Georgia Institute of Technology, College of Computing, Technical Report series. ",
+ "neighbors": [
+ 167,
+ 278,
+ 340,
+ 854
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 719,
+ "label": 2,
+ "text": "Title: Global Stabilization of Linear Discrete-Time Systems with Bounded Feedback \nAbstract: This paper deals with the problem of global stabilization of linear discrete time systems by means of bounded feedback laws. The main result proved is an analog of one proved for the continuous time case by the authors, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections (\"single hidden layer neural networks\") of simple saturation functions. ",
+ "neighbors": [
+ 549,
+ 713,
+ 803,
+ 820
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 720,
+ "label": 2,
+ "text": "Title: Bilinear Separation of Two Sets in n-Space \nAbstract: The NP-complete problem of determining whether two disjoint point sets in the n-dimensional real space R n can be separated by two planes is cast as a bilinear program, that is minimizing the scalar product of two linear functions on a polyhedral set. The bilinear program, which has a vertex solution, is processed by an iterative linear programming algorithm that terminates in a finite number of steps at a point satisfying a necessary optimality condition or at a global minimum. Encouraging computational experience on a number of test problems is reported.",
+ "neighbors": [
+ 76,
+ 129,
+ 240,
+ 721,
+ 737
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 721,
+ "label": 2,
+ "text": "Title: Feature Selection via Mathematical Programming \nAbstract: The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible, is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints (LPEC). Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage (OBD) method for reducing neural network complexity. One feature selection algorithm via concave minimization (FSV) reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4. Feature selection is an important problem in machine learning [18, 15, 16, 17, 33]. In its basic form the problem consists of eliminating as many of the features in a given problem as possible, while still carrying out a preassigned task with acceptable accuracy. Having a minimal number of features often leads to better generalization and simpler models that can be more easily interpreted. In the present work, our task is to discriminate between two given sets in an n-dimensional feature space by using as few of the given features as possible. We shall formulate this problem as a mathematical program with a parametric objective function that will attempt to achieve this task by generating a separating plane in a feature space of as small a dimension as possible while minimizing the average distance of misclassified points to the plane. One of the computational experiments that we carried out on our feature selection procedure showed its effectiveness, not only in minimizing the number of features selected, but also in quickly recognizing and removing spurious random features that were introduced. Thus, on the Wisconsin Prognosis Breast Cancer WPBC database [36] with a feature space of 32 dimensions and 6 random features added, one of our algorithms FSV (11) immediately removed the 6 random features as well as 28 of the original features resulting in a separating plane in a 4-dimensional reduced feature space. By using tenfold cross-validation [35], separation error in the 4-dimensional space was reduced 35.4% from the corresponding error in the original problem space. (See Section 3 for details.) We note that mathematical programming approaches to the feature selection problem have been recently proposed in [4, 22]. Even though the approach of [4] is based on an LPEC formulation, both the LPEC and its method of solution are different from the ones used here. The polyhedral concave minimization approach of [22] is principally involved with theoretical considerations of one specific algorithm and no cross-validatory results are given. Other effective computational applications of mathematical programming to neural networks are given in [30, 26]. ",
+ "neighbors": [
+ 129,
+ 240,
+ 242,
+ 658,
+ 720
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 722,
+ "label": 3,
+ "text": "Title: Factorial Hidden Markov Models \nAbstract: One of the basic probabilistic tools used for time series modeling is the hidden Markov model (HMM). In an HMM, information about the past of the time series is conveyed through a single discrete variable|the hidden state. We present a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. Both inference and learning in this model depend critically on computing the posterior probabilities of the hidden state variables given the observations. We present an exact algorithm for inference in this model, and relate it to the Forward-Backward algorithm for HMMs and to algorithms for more general belief networks. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or mean field theory. We also present a structured approximation in which the the state variables are decoupled, based on which we derive a tractable learning algorithm. Empirical comparisons suggest that these approximations are efficient and accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach's chorales and show that it outperforms HMMs in capturing the complex temporal patterns in this dataset.",
+ "neighbors": [
+ 457,
+ 471,
+ 525,
+ 546,
+ 560,
+ 791,
+ 799
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 723,
+ "label": 3,
+ "text": "Title: Exploiting Tractable Substructures in Intractable Networks \nAbstract: We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher order interactions into a first-order hidden Markov model, treating the corrections (but not the first order structure) within mean field theory.",
+ "neighbors": [
+ 60,
+ 61,
+ 457,
+ 791,
+ 813,
+ 835,
+ 891
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 724,
+ "label": 6,
+ "text": "Title: A THEORY OF LEARNING CLASSIFICATION RULES \nAbstract: This chapter takes a different standpoint to address the problem of learning. We will here reason only in terms of probability, and make extensive use of the chain rule known as \"Bayes' rule\". A fast definition of the basics in probability is provided in appendix A for quick reference. Most of this chapter is a review of the methods of Bayesian learning applied to our modelling purposes. Some original analyses and comments are also provided in section 5.8, 5.11 and 5.12. There is a latent rivalry between \"Bayesian\" and \"Orthodox\" statistics. It is by no means our intention to enter this kind of controversy. We are perfectly willing to accept orthodox as well as unorthodox methods, as long as they are scientifically sound and provide good results when applied to learning tasks. The same disclaimer applies to the two frameworks presented here. They have been the object of heated controversy in the past 3 years in the neural networks community. We will not take side, but only present both frameworks, with their strong points and their weaknesses. In the context of this work, the \"Bayesian frameworks\" are especially interesting as the provide some continuous update rules that can be used during regularised cost minimisation to yield an automatic selection of the regularisation level. Unlike the methods presented in chapter 3, it is not necessary to try several regularisation levels and perform as many optimisations. The Bayesian framework is the only one in which training is achieved through a one-pass optimisation procedure. ",
+ "neighbors": [
+ 217,
+ 238,
+ 241,
+ 519,
+ 582,
+ 586,
+ 671,
+ 917,
+ 946,
+ 1036,
+ 1204
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 725,
+ "label": 3,
+ "text": "Title: Bits-back coding software guide \nAbstract: Abstract | In this document, I first review the theory behind bits-back coding (aka. free energy coding) (Frey and Hinton 1996) and then describe the interface to C-language software that can be used for bits-back coding. This method is a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed bits-back approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The software which I describe in this guide is easy to use and the source code is only a few pages long. I illustrate the bits-back coding software on a simple quantized Gaussian mixture problem. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 726,
+ "label": 2,
+ "text": "Title: Using Recurrent Neural Networks to Learn the Structure of Interconnection Networks \nAbstract: A modified Recurrent Neural Network (RNN) is used to learn a Self-Routing Interconnection Network (SRIN) from a set of routing examples. The RNN is modified so that it has several distinct initial states. This is equivalent to a single RNN learning multiple different synchronous sequential machines. We define such a sequential machine structure as augmented and show that a SRIN is essentially an Augmented Synchronous Sequential Machine (ASSM). As an example, we learn a small six-switch SRIN. After training we extract the net-work's internal representation of the ASSM and corresponding SRIN. fl This paper is adapted from ( Goudreau, 1993, Chapter 6 ) . A shortened version of this paper was published in ( Goudreau & Giles, 1993 ) . ",
+ "neighbors": [
+ 392,
+ 890,
+ 898
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 727,
+ "label": 2,
+ "text": "Title: On the Computational Utility of Consciousness \nAbstract: We propose a computational framework for understanding and modeling human consciousness. This framework integrates many existing theoretical perspectives, yet is sufficiently concrete to allow simulation experiments. We do not attempt to explain qualia (subjective experience), but instead ask what differences exist within the cognitive information processing system when a person is conscious of mentally-represented information versus when that information is unconscious. The central idea we explore is that the contents of consciousness correspond to temporally persistent states in a network of computational modules. Three simulations are described illustrating that the behavior of persistent states in the models corresponds roughly to the behavior of conscious states people experience when performing similar tasks. Our simulations show that periodic settling to persistent (i.e., conscious) states improves performance by cleaning up inaccuracies and noise, forcing decisions, and helping keep the system on track toward a solution.",
+ "neighbors": [
+ 515
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 728,
+ "label": 2,
+ "text": "Title: Rule Revision with Recurrent Neural Networks \nAbstract: Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that not only do the networks preserve correct rules but that they are able to correct through training inserted rules which were initially incorrect. (By incorrect, we mean that the rules were not the ones in the randomly generated grammar.) ",
+ "neighbors": [
+ 231,
+ 890
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 729,
+ "label": 1,
+ "text": "Title: Multi-parent Recombination \nAbstract: In this section we survey recombination operators that can apply more than two parents to create offspring. Some multi-parent recombination operators are defined for a fixed number of parents, e.g. have arity three, in some operators the number of parents is a random number that might be greater than two, and in yet other operators the arity is a parameter that can be set to an arbitrary integer number. We pay special attention to this latter type of operators and summarize results on the effect of operator arity on EA performance. ",
+ "neighbors": [
+ 415,
+ 682,
+ 683
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 730,
+ "label": 0,
+ "text": "Title: Correcting for Length Biasing in Conversational Case Scoring \nAbstract: Inference's conversational case-based reasoning (CCBR) approach, embedded in the CBR Content Navigator line of products, is susceptible to a bias in its case scoring algorithm. In particular, shorter cases tend to be given higher scores, assuming all other factors are held constant. This report summarizes our investigation for mediating this bias. We introduce an approach for eliminating this bias and evaluate how it affects retrieval performance for six case libraries. We also suggest explanations for these results, and note the limitations of our study. ",
+ "neighbors": [
+ 564
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 731,
+ "label": 1,
+ "text": "Title: Stochastic Hillclimbing as a Baseline Method for Evaluating Genetic Algorithms \nAbstract: We investigate the effectiveness of stochastic hillclimbing as a baseline for evaluating the performance of genetic algorithms (GAs) as combinatorial function optimizers. In particular, we address four problems to which GAs have been applied in the literature: the maximum cut problem, Koza's 11-multiplexer problem, MDAP (the Multiprocessor Document Allocation Problem), and the jobshop problem. We demonstrate that simple stochastic hillclimbing methods are able to achieve results comparable or superior to those obtained by the GAs designed to address these four problems. We further illustrate, in the case of the jobshop problem, how insights obtained in the formulation of a stochastic hillclimbing algorithm can lead to improvements in the encoding used by a GA. fl Department of Computer Science, University of California at Berkeley. Supported by a NASA Graduate Fellowship. This paper was written while the author was a visiting researcher at the Ecole Normale Superieure-rue d'Ulm, Groupe de BioInformatique, France. E-mail: juels@cs.berkeley.edu y Department of Mathematics, University of California at Berkeley. Supported by an NDSEG Graduate Fellowship. E-mail: wattenbe@math.berkeley.edu ",
+ "neighbors": [
+ 91,
+ 197,
+ 715,
+ 1154
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 732,
+ "label": 1,
+ "text": "Title: Distribution Category: A Parallel Genetic Algorithm for the Set Partitioning Problem \nAbstract: This work was supported by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38. It was submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate School of the Illinois Institute of Technology, May 1994 (thesis adviser: Dr. Tom Christopher). ",
+ "neighbors": [
+ 91,
+ 420,
+ 579,
+ 607,
+ 622,
+ 627,
+ 715,
+ 880
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 733,
+ "label": 6,
+ "text": "Title: Improving the Accuracy and Speed of Support Vector Machines \nAbstract: Support Vector Learning Machines (SVM) are finding application in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop, any methods for improving the generalization performance, or for improving the speed in test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the \"virtual support vector\" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images of 1.4% to 1.0%. The method for improving the speed (the \"reduced set\" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine, and which has better generalization performance, achieving 1.1% error. The virtual support vector method is applicable to any SVM problem with known invariances. The reduced set method is applicable to any support vector machine. ",
+ "neighbors": [
+ 354
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 734,
+ "label": 1,
+ "text": "Title: ROBO-SHEPHERD: LEARNING COMPLEX ROBOTIC BEHAVIORS \nAbstract: This paper reports on recent results using genetic algorithms to learn decision rules for complex robot behaviors. The method involves evaluating hypothetical rule sets on a simulator and applying simulated evolution to evolve more effective rules. The main contributions of this paper are (1) the task learned is a complex behavior involving multiple mobile robots, and (2) the learned rules are verified through experiments on operational mobile robots. The case study involves a shepherding task in which one mobile robot attempts to guide another robot to a specified area. ",
+ "neighbors": [
+ 529,
+ 704
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 735,
+ "label": 5,
+ "text": "Title: Learning Trees and Rules with Set-valued Features \nAbstract: In most learning systems examples are represented as fixed-length \"feature vectors\", the components of which are either real numbers or nominal values. We propose an extension of the feature-vector representation that allows the value of a feature to be a set of strings; for instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a feature vector with size=small, species=canis-familiaris and color=fwhite,blackg. Since we make no assumptions about the number of possible set elements, this extension of the traditional feature-vector representation is closely connected to Blum's \"infinite attribute\" representation. We argue that many decision tree and rule learning algorithms can be easily extended to set-valued features. We also show by example that many real-world learning problems can be efficiently and naturally represented with set-valued features; in particular, text categorization problems and problems that arise in propositionalizing first-order representations lend themselves to set-valued features. ",
+ "neighbors": [
+ 198,
+ 374,
+ 796,
+ 907
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 736,
+ "label": 2,
+ "text": "Title: Modeling Volatility using State Space Models \nAbstract: In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude due to their ignoring the distinction between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. ",
+ "neighbors": [
+ 357,
+ 388,
+ 613
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 737,
+ "label": 2,
+ "text": "Title: Misclassification Minimization \nAbstract: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that \"counts\" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem.",
+ "neighbors": [
+ 76,
+ 127,
+ 240,
+ 720
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 738,
+ "label": 2,
+ "text": "Title: Priority ASOCS ASOCS models have two significant advantages over other learning models: \nAbstract: This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. An ASOCS can operate in either a data processing mode or a learning mode. During data processing mode, an ASOCS acts as a parallel hardware circuit. During learning mode, an ASOCS incorporates a rule expressed as a Boolean conjunction in a distributed fashion in time logarithmic in the number of rules. This paper proposes a learning algorithm and architecture for Priority ASOCS. This new ASOCS model uses rules with priorities. The new model has significant learning time and space complexity improvements over previous models. Non-von Neumann architectures such as neural networks attack the word-at-a-time bottleneck of traditional computing systems [1]. Neural networks learn input-output mappings using highly distributed processing and memory [10,11,12]. Their numerous simple processing elements with modifiable weighted links permit a high degree of parallelism. A typical neural network has fixed topology. It learns by modifying weighted links between nodes. A new class of connectionist architectures has been proposed called ASOCS (Adaptive Self-Organizing Concurrent Systems) [4,5]. ASOCS models support efficient computation through self-organized learning and parallel execution. Learning is done through the incremental presentation of rules and/or examples. ASOCS models learn by modifying their topology. Data types include Boolean and multi-state variables; recent models support analog variables. The model incorporates rules into an adaptive logic network in a parallel and self organizing fashion. In processing mode, ASOCS supports fully parallel execution on actual inputs according to the learned rules. The adaptive logic network acts as a parallel hardware circuit during execution, mapping n input boolean vectors into m output boolean vectors, in a combinatoric fashion. The overall philosophy of ASOCS follows the high level goals of current neural network models. However, the mechanisms of learning and execution vary significantly. The ASOCS logic network is topologically dynamic with the network growing to efficiently fit the specific application. Current ASOCS models are based on digital nodes. ASOCS also supports use of symbolic and heuristic learning mechanisms, thus combining the parallelism and distributed nature of connectionist computing with the potential power of AI symbolic learning. A proof of concept ASOCS chip has been developed [2]. ",
+ "neighbors": [
+ 171,
+ 473,
+ 595,
+ 614,
+ 670,
+ 685
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 739,
+ "label": 2,
+ "text": "Title: On the Distribution of Performance from Multiple Neural Network Trials, On the Distribution of Performance\nAbstract: Andrew D. Back was with the Department of Electrical and Computer Engineering, University of Queensland. St. Lucia, Australia. He is now with the Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan Abstract The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance ",
+ "neighbors": [
+ 647,
+ 650,
+ 651,
+ 673
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 740,
+ "label": 3,
+ "text": "Title: [6] D. Geiger. Graphoids: a qualitative framework for probabilistic inference. An introduction to algorithms for\nAbstract: Andrew D. Back was with the Department of Electrical and Computer Engineering, University of Queensland. St. Lucia, Australia. He is now with the Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan Abstract The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance ",
+ "neighbors": [
+ 152,
+ 861
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 741,
+ "label": 1,
+ "text": "Title: Environmental Effects on Minimal Behaviors in the Minimat World \nAbstract: The structure of an environment affects the behaviors of the organisms that have evolved in it. How is that structure to be described, and how can its behavioral consequences be explained and predicted? We aim to establish initial answers to these questions by simulating the evolution of very simple organisms in simple environments with different structures. Our artificial creatures, called \"minimats,\" have neither sensors nor memory and behave solely by picking amongst the actions of moving, eating, reproducing, and sitting, according to an inherited probability distribution. Our simulated environments contain only food (and multiple minimats) and are structured in terms of their spatial and temporal food density and the patchiness with which the food appears. Changes in these environmental parameters affect the evolved behaviors of minimats in different ways, and all three parameters are of importance in describing the minimat world. One of the most useful behavioral strategies that evolves is \"looping\" movement, which allows minimats-despite their lack of internal state-to match their behavior to the temporal (and spatial) structure of their environment. Ultimately we find that minimats construct their own environments through their individual behaviors, making the study of the impact of global environment structure on individual behavior much more complex. ",
+ "neighbors": [
+ 123
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 742,
+ "label": 3,
+ "text": "Title: Causal diagrams for empirical research \nAbstract: The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained. Key words: Causal inference, graph models, structural equations, treatment effect. ",
+ "neighbors": [
+ 141,
+ 451,
+ 557,
+ 895,
+ 1125,
+ 1135
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 743,
+ "label": 2,
+ "text": "Title: A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features \nAbstract: In the past, nearest neighbor algorithms for learning from examples have worked best in domains in which all features had numeric values. In such domains, the examples can be treated as points and distance metrics can use standard definitions. In symbolic domains, a more sophisticated treatment of the feature space is required. We introduce a nearest neighbor algorithm for learning in domains with symbolic features. Our algorithm calculates distance tables that allow it to produce real-valued distances between instances, and attaches weights to the instances to further modify the structure of feature space. We show that this technique produces excellent classification accuracy on three problems that have been studied by machine learning researchers: predicting protein secondary structure, identifying DNA promoter sequences, and pronouncing English text. Direct experimental comparisons with the other learning algorithms show that our nearest neighbor algorithm is comparable or superior in all three domains. In addition, our algorithm has advantages in training speed, simplicity, and perspicuity. We conclude that experimental evidence favors the use and continued development of nearest neighbor algorithms for domains such as the ones studied here. ",
+ "neighbors": [
+ 456,
+ 548,
+ 583,
+ 591,
+ 630,
+ 653,
+ 793,
+ 886,
+ 917
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 744,
+ "label": 6,
+ "text": "Title: Supervised and Unsupervised Discretization of Continuous Features \nAbstract: Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm significantly improved if features were discretized in advance; in our experiments, the performance never significantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretiz ing features.",
+ "neighbors": [
+ 583,
+ 1067
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 745,
+ "label": 1,
+ "text": "Title: Using Genetic Algorithms for Supervised Concept Learning \nAbstract: Genetic Algorithms (GAs) have traditionally been used for non-symbolic learning tasks. In this chapter we consider the application of a GA to a symbolic learning task, supervised concept learning from examples. A GA concept learner (GABL) is implemented that learns a concept from a set of positive and negative examples. GABL is run in a batch-incremental mode to facilitate comparison with an incremental concept learner, ID5R. Preliminary results support that, despite minimal system bias, GABL is an effective concept learner and is quite competitive with ID5R as the target concept increases in complexity. ",
+ "neighbors": [
+ 91,
+ 462,
+ 643,
+ 677,
+ 687,
+ 688,
+ 817,
+ 842,
+ 945
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 746,
+ "label": 1,
+ "text": "Title: THE OPTIONS DESIGN EXPLORATION SYSTEM Reference Manual and User Guide Version B2.1 \nAbstract: Genetic Algorithms (GAs) have traditionally been used for non-symbolic learning tasks. In this chapter we consider the application of a GA to a symbolic learning task, supervised concept learning from examples. A GA concept learner (GABL) is implemented that learns a concept from a set of positive and negative examples. GABL is run in a batch-incremental mode to facilitate comparison with an incremental concept learner, ID5R. Preliminary results support that, despite minimal system bias, GABL is an effective concept learner and is quite competitive with ID5R as the target concept increases in complexity. ",
+ "neighbors": [
+ 91,
+ 462,
+ 941
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 747,
+ "label": 3,
+ "text": "Title: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection \nAbstract: We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment|over half a million runs of C4.5 and a Naive-Bayes algorithm|to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation, we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-word datasets similar to ours, the best method to use for model selection is ten-fold stratified cross validation, even if computation power allows using more folds. ",
+ "neighbors": [
+ 514,
+ 585,
+ 592,
+ 686,
+ 712,
+ 749,
+ 751,
+ 899
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 748,
+ "label": 3,
+ "text": "Title: Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid \nAbstract: Naive-Bayes induction algorithms were previously shown to be surprisingly accurate on many classification tasks even when the conditional independence assumption on which they are based is violated. However, most studies were done on small databases. We show that in some larger databases, the accuracy of Naive-Bayes does not scale up as well as decision trees. We then propose a new algorithm, NBTree, which induces a hybrid of decision-tree classifiers and Naive-Bayes classifiers: the decision-tree nodes contain uni-variate splits as regular decision-trees, but the leaves contain Naive-Bayesian classifiers. The approach retains the interpretability of Naive-Bayes and decision trees, while resulting in classifiers that frequently outperform both constituents, especially in the larger databases tested. ",
+ "neighbors": [
+ 587
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 749,
+ "label": 6,
+ "text": "Title: MLC A Machine Learning Library in C \nAbstract: We present MLC ++ , a library of C ++ classes and tools for supervised Machine Learning. While MLC ++ provides general learning algorithms that can be used by end users, the main objective is to provide researchers and experts with a wide variety of tools that can accelerate algorithm development, increase software reliability, provide comparison tools, and display information visually. More than just a collection of existing algorithms, MLC ++ is an attempt to extract commonalities of algorithms and decompose them for a unified view that is simple, coherent, and extensible. In this paper we discuss the problems MLC ++ aims to solve, the design of MLC ++ , and the current functionality. ",
+ "neighbors": [
+ 583,
+ 747,
+ 1211
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 750,
+ "label": 3,
+ "text": "Title: Computing Nonparametric Hierarchical Models \nAbstract: Bayesian models involving Dirichlet process mixtures are at the heart of the modern nonparametric Bayesian movement. Much of the rapid development of these models in the last decade has been a direct result of advances in simulation-based computational methods. Some of the very early work in this area, circa 1988-1991, focused on the use of such nonparametric ideas and models in applications of otherwise standard hierarchical models. This chapter provides some historical review and perspective on these developments, with a prime focus on the use and integration of such nonparametric ideas in hierarchical models. We illustrate the ease with which the strict parametric assumptions common to most standard Bayesian hierarchical models can be relaxed to incorporate uncertainties about functional forms using Dirichlet process components, partly enabled by the approach to computation using MCMC methods. The resulting methology is illustrated with two examples taken from an unpublished 1992 report on the topic.",
+ "neighbors": [
+ 498,
+ 535,
+ 578,
+ 923
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 751,
+ "label": 6,
+ "text": "Title: An Analysis of Bayesian Classifiers (1988), involves the formulation of average-case models for specific algorithms\nAbstract: In this paper we present an average-case analysis of the Bayesian classifier, a simple induction algorithm that fares remarkably well on many learning tasks. Our analysis assumes a monotone conjunctive target concept, and independent, noise-free Boolean attributes. We calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions and then use this to compute the probability of correct classification over the instance space. The analysis takes into account the number of training instances, the number of attributes, the distribution of these attributes, and the level of class noise. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning. One goal of research in machine learning is to discover principles that relate algorithms and domain characteristics to behavior. To this end, many researchers have carried out systematic experimentation with natural and artificial domains in search of empirical regularities (e.g., Kibler & Langley, 1988). Others have focused on theoretical analyses, often within the paradigm of probably approximately correct learning (e.g., Haus-sler, 1990). However, most experimental studies are based only on informal analyses of the learning task, whereas most formal analyses address the worst case, and thus bear little relation to empirical results. ber of attributes, and the class and attribute frequencies, they obtain predictions about the behavior of induction algorithms and used experiments to check their analyses. 1 However, their research does not focus on algorithms typically used by the experimental and practical sides of machine learning, and it is important that average-case analyses be extended to such methods. Recently, there has been growing interest in probabilistic approaches to inductive learning. For example, Fisher (1987) has described Cobweb, an incremental algorithm for conceptual clustering that draws heavily on Bayesian ideas, and the literature reports a number of systems that build on this work (e.g., Allen & Lang-ley, 1990; Iba & Gennari, 1991; Thompson & Langley, 1991). Cheeseman et al. (1988) have outlined Auto-Class, a nonincremental system that uses Bayesian methods to cluster instances into groups, and other researchers have focused on the induction of Bayesian inference networks (e.g., Cooper & Kerskovits, 1991). These recent Bayesian learning algorithms are complex and not easily amenable to analysis, but they share a common ancestor that is simpler and more tractable. This supervised algorithm, which we refer to simply as a Bayesian classifier, comes originally from work in pattern recognition (Duda & Hart, 1973). The method stores a probabilistic summary for each class; this summary contains the conditional probability of each attribute value given the class, as well as the probability (or base rate) of the class. This data structure approximates the representational power of a perceptron; it describes a single decision boundary through the instance space. When the algorithm encounters a new instance, it updates the probabilities stored with the specified class. Neither the order of training instances nor the occurrence of classification errors have any effect on this process. 
When given a test instance, the classifier uses an evaluation function (which we describe in detail later) to rank the alter ",
+ "neighbors": [
+ 630,
+ 747,
+ 933,
+ 1342
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 752,
+ "label": 2,
+ "text": "Title: ADAPTIVE REGULARIZATION \nAbstract: Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work we provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent in the estimated generalization error with respect to the regularization parameters. The scheme is implemented in our Designer Net framework for network training and pruning, i.e., is based on the diagonal Hessian approximation. The scheme does not require essential computational overhead in addition to what is needed for training and pruning. The viability of the approach is demonstrated in an experiment concerning prediction of the chaotic Mackey-Glass series. We find that the optimized weight decays are relatively large for densely connected networks in the initial pruning phase, while they decrease as pruning proceeds. ",
+ "neighbors": [
+ 86,
+ 240
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 753,
+ "label": 2,
+ "text": "Title: Growing Layers of Perceptrons: Introducing the Extentron Algorithm \nAbstract: vations of perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best split of the examples, and (2) it is always possible for the perceptron to build a hyper- plane that separates at least one example from all the rest. We describe the Extentron which grows multi-layer networks capable of distinguishing non- linearly-separable data using the simple perceptron rule for linear threshold units. The resulting algorithm is simple, very fast, scales well to large prob - lems, retains the convergence properties of the perceptron, and can be completely specified using only two parameters. Results are presented comparing the Extentron to other neural network paradigms and to symbolic learning systems. ",
+ "neighbors": [
+ 472
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 754,
+ "label": 6,
+ "text": "Title: Randomly Fallible Teachers: Learning Monotone DNF with an Incomplete Membership Oracle \nAbstract: We introduce a new fault-tolerant model of algorithmic learning using an equivalence oracle and an incomplete membership oracle, in which the answers to a random subset of the learner's membership queries may be missing. We demonstrate that, with high probability, it is still possible to learn monotone DNF formulas in polynomial time, provided that the fraction of missing answers is bounded by some constant less than one. Even when half the membership queries are expected to yield no information, our algorithm will exactly identify m-term, n-variable monotone DNF formulas with an expected O(mn 2 ) queries. The same task has been shown to require exponential time using equivalence queries alone. We extend the algorithm to handle some one-sided errors, and discuss several other possible error models. It is hoped that this work may lead to a better understanding of the power of membership queries and the effects of faulty teachers on query models of concept learning. ",
+ "neighbors": [
+ 392,
+ 572,
+ 767,
+ 808
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 755,
+ "label": 0,
+ "text": "Title: Use of Mental Models for Constraining Index Learning in Experience-Based Design \nAbstract: The power of the case-based method comes from the ability to retrieve the \"right\" case when a new problem is specified. This implies that learning the \"right\" indices to a case before storing it for potential reuse is crucial for the success of the method. A hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary, and learning the right level of generalization. In this paper we show how the use of structure-behavior-function (SBF) models constrains index learning in the context of experience-based design of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design, together with a specification of the task for which the design case might be reused, provides the vocabulary for indexing the design case in memory. We also discuss how the prior design experiences stored in case-memory help to determine the level of index generalization. The KRITIK2 system implements and evaluates the model-based method for learning indices to design cases.",
+ "neighbors": [
+ 598,
+ 599,
+ 913
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 756,
+ "label": 2,
+ "text": "Title: References Linear Controller Design, Limits of Performance, \"The parallel projection operators of a nonlinear feedback\nAbstract: 13] Yang, Y., H.J. Sussmann, and E.D. Sontag, \"Stabilization of linear systems with bounded controls,\" in Proc. Nonlinear Control Systems Design Symp., Bordeaux, June 1992 (M. Fliess, Ed.), IFAC Publications, pp. 15-20. Journal version to appear in IEEE Trans. Autom. Control . ",
+ "neighbors": [
+ 713,
+ 805
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 757,
+ "label": 3,
+ "text": "Title: Markov Chain Monte Carlo Model Determination for Hierarchical and Graphical Log-linear Models \nAbstract: The Bayesian approach to comparing models involves calculating the posterior probability of each plausible model. For high-dimensional contingency tables, the set of plausible models is very large. We focus attention on reversible jump Markov chain Monte Carlo (Green, 1995) and develop strategies for calculating posterior probabilities of hierarchical, graphical or decomposable log-linear models. Even for tables of moderate size, these sets of models may be very large. The choice of suitable prior distributions for model parameters is also discussed in detail, and two examples are presented. For the first example, a 2 fi 3 fi 4 table, the model probabilities calculated using our reversible jump approach are compared with model probabilities calculated exactly or by using an alternative approximation. The second example is a 2 6 contingency table for which exact methods are infeasible, due to the large number of possible models. ",
+ "neighbors": [
+ 47,
+ 648,
+ 698
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 758,
+ "label": 0,
+ "text": "Title: Learning Indices for Schema Selection \nAbstract: In addition to learning new knowledge, a system must be able to learn when the knowledge is likely to be applicable. An index is a piece of information which, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system's memory. We discuss the issue of how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We discuss two methods using which the system can identify the relevant schema even if the input does not directly match an existing index, and learn a new index to allow it to retrieve this schema more efficiently in the future.",
+ "neighbors": [
+ 599,
+ 855,
+ 857
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 759,
+ "label": 2,
+ "text": "Title: FONN: Combining First Order Logic with Connectionist Learning \nAbstract: This paper presents a neural network architecture that can manage structured data and refine knowledge bases expressed in a first order logic language. The presented framework is well suited to classification problems in which concept de scriptions depend upon numerical features of the data. In fact, the main goal of the neural architecture is that of refining the numerical part of the knowledge base, without changing its structure. In particular, we discuss a method to translate a set of classification rules into neural computation units. Here, we focus our attention on the translation method and on algorithms to refine network weights on struc tured data. The classification theory to be refined can be manually handcrafted or automatically acquired by a symbolic relational learning system able to deal with numerical features. As a matter of fact, the primary goal is to bring into a neural network architecture the capability of dealing with structured data of unrestricted size, by allowing to dynamically bind the classification rules to different items occur ring in the input data. An extensive experimentation on a challenging artificial case study shows that the network converges quite fastly and generalizes much better than propositional learners on an equivalent task definition. ",
+ "neighbors": [
+ 357,
+ 931,
+ 1339
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 760,
+ "label": 1,
+ "text": "Title: Culling Teaching -1 Culling and Teaching in Neuro-evolution \nAbstract: The evolving population of neural nets contains information not only in terms of genes, but also in the collection of behaviors of the population members. Such information can be thought of as a kind of culture of the population. Two ways of exploiting that culture are explored in this paper: (1) Culling overlarge litters: Generate a large number of offspring with different crossovers, quickly evaluate them by comparing their performance to the population, and throw away those that appear poor. (2) Teaching: Use backpropagation to train offspring toward the performance of the population. Both techniques result in faster, more effective neuro-evolution, and they can be effectively combined, as is demonstrated on the inverted pendulum problem. Additional methods of cultural exploitation are possible and will be studied in future work. These results suggest that cultural exploitation is a powerful idea that allows leveraging several aspects of the genetic algorithm.",
+ "neighbors": [
+ 169,
+ 542,
+ 1197
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 761,
+ "label": 0,
+ "text": "Title: The Structure-Mapping Engine: Algorithm and Examples \nAbstract: This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a \"tool kit\" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O (N 2 ). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact forbus@ils.nwu.edu ",
+ "neighbors": [
+ 41,
+ 182,
+ 468,
+ 599,
+ 637,
+ 662,
+ 669,
+ 932,
+ 935
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 762,
+ "label": 0,
+ "text": "Title: Modeling Invention by Analogy in ACT-R \nAbstract: We investigate some aspects of cognition involved in invention, more precisely in the invention of the telephone by Alexander Graham Bell. We propose the use of the Structure-Behavior-Function (SBF) language for the representation of invention knowledge; we claim that because SBF has been shown to support a wide range of reasoning about physical devices, it constitutes a plausible account of how an inventor might represent knowledge of an invention. We further propose the use of the ACT-R architecture for the implementation of this model. ACT-R has been shown to very precisely model a wide range of human cognition. We draw upon the architecture for execution of productions and matching of declarative knowledge through spreading activation. Thus we present a model which combines the well-established cognitive validity of ACT-R with the powerful, specialized model-based reasoning methods facilitated by SBF. ",
+ "neighbors": [
+ 599,
+ 649,
+ 913,
+ 919
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 763,
+ "label": 2,
+ "text": "Title: Constraint Tangent Distance for On-line Character Recognition \nAbstract: In on-line character recognition we can observe two kinds of intra-class variations: small geometric deformations and completely different writing styles. We propose a new approach to deal with these problems by defining an extension of tangent distance [9], well known in off-line character recognition. The system has been implemented with a k-nearest neighbor classifier and a so called diabolo classifier [6] respectively. Both classifiers are invariant under transformations like rotation, scale or slope and can deal with variations in stroke order and writing direction. Results are presented for our digit database with more than 200 writers. ",
+ "neighbors": [
+ 387
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 764,
+ "label": 6,
+ "text": "Title: On the Complexity of Function Learning \nAbstract: The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range f0; 1g. Much less is known about the theory of learning functions with a larger range such as IN or IR. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any \"learning\" model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from IR d into IR. Previously, the class of linear functions from IR d into IR was the only class of functions with multi-dimensional domain that was known to be learnable within the rigorous framework of a formal model for on-line learning. Finally we give a sufficient condition for an arbitrary class F of functions from IR into IR that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F . This allows us to exhibit a number of further nontrivial classes of functions from IR into IR for which there exist efficient learning algorithms. ",
+ "neighbors": [
+ 255,
+ 346,
+ 874,
+ 927
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 765,
+ "label": 2,
+ "text": "Title: Extracting Comprehensible Concept Representations from Trained Neural Networks \nAbstract: Although they are applicable to a wide array of problems, and have demonstrated good performance on a number of difficult, real-world tasks, neural networks are not usually applied to problems in which comprehensibility of the acquired concepts is important. The concept representations formed by neural networks are hard to understand because they typically involve distributed, nonlinear relationships encoded by a large number of real-valued parameters. To address this limitation, we have been developing algorithms for extracting \"symbolic\" concept representations from trained neural networks. We first discuss why it is important to be able to understand the concept representations formed by neural networks. We then briefly describe our approach and discuss a number of issues pertaining to comprehensibility that have arisen in our work. Finally, we discuss choices that we have made in our research to date, and open research issues that we have not yet addressed. ",
+ "neighbors": [
+ 602
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 766,
+ "label": 1,
+ "text": "Title: Towards Automatic Discovery of Building Blocks in Genetic Programming \nAbstract: This paper presents an algorithm for the discovery of building blocks in genetic programming (GP) called adaptive representation through learning (ARL). The central idea of ARL is the adaptation of the problem representation, by extending the set of terminals and functions with a set of evolvable subroutines. The set of subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL supports subroutine creation and deletion. Subroutine creation or discovery is performed automatically based on the differential parent-offspring fitness and block activation. Subroutine deletion relies on a utility measure similar to schema fitness over a window of past generations. The technique described is tested on the problem of controlling an agent in a dynamic and non-deterministic environment. The automatic discovery of subroutines can help scale up the GP technique to complex problems. ",
+ "neighbors": [
+ 91,
+ 664,
+ 667,
+ 1145
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 767,
+ "label": 6,
+ "text": "Title: Exact Identification of Read-once Formulas Using Fixed Points of Amplification Functions \nAbstract: In this paper we describe a new technique for exactly identifying certain classes of read-once Boolean formulas. The method is based on sampling the input-output behavior of the target formula on a probability distribution that is determined by the fixed point of the formula's amplification function (defined as the probability that a 1 is output by the formula when each input bit is 1 independently with probability p). By performing various statistical tests on easily sampled variants of the fixed-point distribution, we are able to efficiently infer all structural information about any logarithmic-depth formula (with high probability). We apply our results to prove the existence of short universal identification sequences for large classes of formulas. We also describe extensions of our algorithms to handle high rates of noise, and to learn formulas of unbounded depth in Valiant's model with respect to specific distributions. Most of this research was carried out while all three authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. R. Schapire received additional support from AFOSR Grant 89-0506 while at Harvard University. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108. ",
+ "neighbors": [
+ 392,
+ 754,
+ 1141,
+ 1333
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 768,
+ "label": 2,
+ "text": "Title: ``Learning Local Error Bars for Nonlinear Regression.'' Learning Local Error Bars for Nonlinear Regression \nAbstract: We present a new method for obtaining local error bars for nonlinear regression, i.e., estimates of the confidence in predicted values that depend on the input. We approach this problem by applying a maximum-likelihood framework to an assumed distribution of errors. We demonstrate our method first on computer-generated data with locally varying, normally distributed target noise. We then apply it to laser data from the Santa Fe Time Series Competition where the underlying system noise is known quantization error and the error bars give local estimates of model misspecification. In both cases, the method also provides a weighted-regression effect that improves generalization performance. ",
+ "neighbors": [
+ 1170,
+ 1226,
+ 1227,
+ 1241,
+ 1242,
+ 1284
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 769,
+ "label": 0,
+ "text": "Title: Learning to Refine Indexing by Introspective Reasoning \nAbstract: A significant problem for case-based reasoning (CBR) systems is deciding what features to use in judging case similarity for retrieval. We describe research that addresses the feature selection problem by using introspective reasoning to learn new features for indexing. Our method augments the CBR system with an introspective reasoning component which monitors system performance to detect poor retrievals, identifies features which would lead to retrieving cases requiring less adaptation, and refines the indices to include such features in order to avoid similar future failures. We explore the benefit of introspective reasoning by performing empirical tests on the implemented system. These tests examine the benefits of introspective index refinement and the effects of problem order on case and index learning, and show that introspective learning of new index features improves overall performance across the range of different problem orders.",
+ "neighbors": [
+ 474
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 770,
+ "label": 0,
+ "text": "Title: Structure oriented case retrieval \nAbstract: ",
+ "neighbors": [
+ 256,
+ 566
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 771,
+ "label": 5,
+ "text": "Title: From Theory Refinement to KB Maintenance: a Position Statement \nAbstract: Since we consider theory refinement (TR) as a possible key concept for a methodologically clear view of knowledge-base maintenance, we try to give a structured overview about the actual state-of-the-art in TR. This overview is arranged along the description of TR as a search problem. We explain the basic approach, show the variety of existing systems and try to give some hints about the direction future research should go. ",
+ "neighbors": [
+ 72,
+ 624
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 772,
+ "label": 3,
+ "text": "Title: MCMC CONVERGENCE DIAGNOSTIC VIA THE CENTRAL LIMIT THEOREM \nAbstract: Markov Chain Monte Carlo (MCMC) methods, as introduced by Gelfand and Smith (1990), provide a simulation based strategy for statistical inference. The application fields related to these methods, as well as theoretical convergence properties, have been intensively studied in the recent literature. However, many improvements are still expected to provide workable and theoretically well-grounded solutions to the problem of monitoring the convergence of actual outputs from MCMC algorithms (i.e. the convergence assessment problem). In this paper, we introduce and discuss a methodology based on the Central Limit Theorem for Markov chains to assess convergence of MCMC algorithms. Instead of searching for approximate stationarity, we primarily intend to control the precision of estimates of the invariant probability measure, or of integrals of functions with respect to this measure, through confidence regions based on normal approximation. The first proposed control method tests the normality hypothesis for normalized averages of functions of the Markov chain over independent parallel chains. This normality control provides good guarantees that the whole state space has been explored, even in multimodal situations. It can lead to automated stopping rules. A second tool connected with the normality control is based on graphical monitoring of the stabilization of the variance after n iterations near the limiting variance appearing in the CLT. Both methods require no knowledge of the sampler driving the chain. In this paper, we mainly focus on finite state Markov chains, since this setting allows us to derive consistent estimates of both the limiting variance and the variance after n iterations. Heuristic procedures based on Berry-Esseen bounds are investigated. An extension to the continuous case is also proposed. Numerical simulations illustrating the performance of these methods are given for several examples: a finite chain with multimodal invariant probability, a finite state random walk for which the theoretical rate of convergence to stationarity is known, and a continuous state chain with multimodal invariant probability issued from a Gibbs sampler. ",
+ "neighbors": [
+ 202,
+ 520
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 773,
+ "label": 2,
+ "text": "Title: A simple algorithm that discovers efficient perceptual codes \nAbstract: We describe the \"wake-sleep\" algorithm that allows a multilayer, unsupervised, neural network to build a hierarchy of representations of sensory input. The network has bottom-up \"recognition\" connections that are used to convert sensory input into underlying representations. Unlike most artificial neural networks, it also has top-down \"generative\" connections that can be used to reconstruct the sensory input from the representations. In the \"wake\" phase of the learning algorithm, the network is driven by the bottom-up recognition connections and the top-down generative connections are trained to be better at reconstructing the sensory input from the representation chosen by the recognition process. In the \"sleep\" phase, the network is driven top-down by the generative connections to produce a fantasized representation and a fantasized sensory input. The recognition connections are then trained to be better at recovering the fantasized representation from the fantasized sensory input. In both phases, the synaptic learning rule is simple and local. The combined effect of the two phases is to create representations of the sensory input that are efficient in the following sense: On average, it takes more bits to describe each sensory input vector directly than to first describe the representation of the sensory input chosen by the recognition process and then describe the difference between the sensory input and its reconstruction from the chosen representation.",
+ "neighbors": [
+ 504
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 774,
+ "label": 4,
+ "text": "Title: Near-Optimal Performance for Reinforcement Learning in Polynomial Time \nAbstract: We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off. These are the first results in the reinforcement learning literature giving algorithms that provably converge to near-optimal performance in polynomial time for general Markov decision processes. ",
+ "neighbors": [
+ 178,
+ 327,
+ 426,
+ 954
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 775,
+ "label": 0,
+ "text": "Title: The case for cases: a call for purity in case-based reasoning inherently more difficult than\nAbstract: A basic premise of case-based reasoning (CBR) is that it involves reasoning from cases, which are representations of real episodes, rather than from rules, which are facts and if then structures with no stated connection to any real episodes. In fact, most CBR systems do not reason directly from cases | rather they reason from abstractions or simplifications of cases. In this paper, we argue for \"pure\" case-based reasoning, i.e., reasoning from representations that are both concrete and reasonably complete. We claim that working from representations that satisfy these criteria We illustrate our argument with examples from three previous systems, chef, swale, and hypo, as well as from cookie, a CBR system being developed by the first author.",
+ "neighbors": [
+ 166,
+ 182,
+ 915
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 776,
+ "label": 4,
+ "text": "Title: Generalization in Reinforcement Learning: Safely Approximating the Value Function \nAbstract: To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization.",
+ "neighbors": [
+ 9,
+ 45,
+ 95,
+ 136,
+ 322,
+ 327,
+ 334,
+ 511,
+ 800,
+ 859,
+ 1269
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 777,
+ "label": 1,
+ "text": "Title: Evaluating Evolutionary Algorithms \nAbstract: Test functions are commonly used to evaluate the effectiveness of different search algorithms. However, the results of evaluation are as dependent on the test problems as they are on the algorithms that are the subject of comparison. Unfortunately, developing a test suite for evaluating competing search algorithms is difficult without clearly defined evaluation goals. In this paper we discuss some basic principles that can be used to develop test suites and we examine the role of test suites as they have been used to evaluate evolutionary search algorithms. Current test suites include functions that are easily solved by simple search methods such as greedy hill-climbers. Some test functions also have undesirable characteristics that are exaggerated as the dimensionality of the search space is increased. New methods are examined for constructing functions with different degrees of nonlinearity, where the interactions and the cost of evaluation scale with respect to the dimensionality of the search space.",
+ "neighbors": [
+ 91,
+ 462,
+ 902,
+ 950
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 778,
+ "label": 3,
+ "text": "Title: A Context-Sensitive Generalization of ICA \nAbstract: Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing n unknown independent sources through an unknown n fi n mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where standard ICA cannot, including sources with low kurtosis, colored gaussian sources, and sources which have gaussian histograms. Since ICA is a special case of cICA, the MLE derivation provides as a corollary a rigorous derivation of classic ICA. ",
+ "neighbors": [
+ 331,
+ 335,
+ 848
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 779,
+ "label": 2,
+ "text": "Title: Data-defined Problems and Multiversion Neural-net Systems \nAbstract: We inv estigate the applicability of an adaptive neural network to problems with time-dependent input by demonstrating that a deterministic parser for natural language inputs of significant syntactic complexity can be developed using recurrent connectionist architectures. The traditional stacking mechanism, known to be necessary for proper treatment of context-free languages in symbolic systems, is absent from the design, having been subsumed by recurrency in the network.",
+ "neighbors": [
+ 82
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 780,
+ "label": 6,
+ "text": "Title: New Evidence Driven State Merging Algorithm \nAbstract: Results of the Abbadingo One DFA Learning Competition Abstract This paper first describes the structure and results of the Abbadingo One DFA Learning Competition. The competition was designed to encourage work on algorithms that scale wellboth to larger DFAs and to sparser training data. We then describe and discuss the winning algorithm of Rodney Price, which orders state merges according to the amount of evidence in their favor. A second winning algorithm, of Hugues and",
+ "neighbors": [
+ 392,
+ 948,
+ 1220
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 781,
+ "label": 2,
+ "text": "Title: Cortical activity flips among quasi stationary states \nAbstract: M. Abeles, H. Bergman and E. Vaadia, School of Medicine and Center for Neural Computation Hebrew University, POB 12272, Jerusalem 91120, Is-rael. E. Seidemann and I. Meilijson, School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, and School of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel. I. Gat and N. Tishby, Institute of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel. ",
+ "neighbors": [
+ 697
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 782,
+ "label": 2,
+ "text": "Title: Support Vector Machines: Training and Applications \nAbstract: The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Laboratories [3, 6, 8, 24]. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. The main idea behind the technique is to separate the classes with a surface that maximizes the margin between them. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle [23]. The derivation of Support Vector Machines, its relationship with SRM, and its geometrical insight, are discussed in this paper. Since Structural Risk Minimization is an inductive principle that aims at minimizing a bound on the generalization error of a model, rather than minimizing the Mean Square Error over the data set (as Empirical Risk Minimization methods do), training a SVM to obtain the maximum margin classifier requires a different objective function. This objective function is then optimized by solving a large-scale quadratic programming problem with linear and box constraints. The problem is considered challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVM's over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of, and also establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVM's, we present preliminary results in Frontal Human Face Detection in images. This application opens many interesting questions and future research opportunities, both in the context of faster and better optimization algorithms, and in the use of SVM's in other pattern classification, recognition, and detection applications. This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by MURI grant N00014-95-1-0600; by a grant from ONR/ARPA under contract N00014-92-J-1879 and by the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Edgar Osuna was supported by Fundacion Gran Mariscal de Ayacucho and Daimler Benz. Additional support is provided by Daimler-Benz, Eastman Kodak Company, Siemens Corporate Research, Inc. and AT&T. ",
+ "neighbors": [
+ 477,
+ 613
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 783,
+ "label": 3,
+ "text": "Title: Probabilistic Independence Networks for Hidden Markov Probability Models \nAbstract: Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.",
+ "neighbors": [
+ 525,
+ 560,
+ 791,
+ 799,
+ 835
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 784,
+ "label": 1,
+ "text": "Title: Learning Where To Go without Knowing Where That Is: The Acquisition of a Non-reactive Mobot\nAbstract: In the path-imitation task, one agent traces out a path through a second agent's sensory field. The second agent then has to reproduce that path exactly, i.e. move through the sequence of locations visited by the first agent. This is a non-trivial behaviour whose acquisition might be expected to involve special-purpose (i.e., strongly biased) learning machinery. However, the present paper shows this is not the case. The behaviour can be acquired using a fairly primitive learning regime provided that the agent's environment can be made to pass through a specific sequence of dynamic states.",
+ "neighbors": [
+ 860
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 785,
+ "label": 6,
+ "text": "Title: Towards Robust Model Selection using Estimation and Approximation Error Bounds \nAbstract: Let us present briefly the learning problem we will address in this chapter and the following. The ultimate goal is the modelling of a mapping f : x 7! y from multidimensional input x to output y. The output can be multi-dimensional, but we will mostly address situations where it is a one dimensional real value. Furthermore, we should take into account the fact that we scarcely ever observe the actual true mapping y = f (x). This is due to perturbations such as e.g. observational noise. We will rather have a joint probability p (x; y). We expect this probability to be peaked for values of x and y corresponding to the mapping. We focus on automatic learning by example. A set D = of data sampled from the joint distribution p (x; y) = p (yjx) p (x) is collected. With the help of this set, we try to identify a model of the data, parameterised by a set of 1.2 Learning and optimisation The fit of the model to the system in a given point x is measured using a criterion representing the distance from the model prediction b y to the system, e (y; f w (x)). This is the local risk . The performance of the model is measured by the expected This quantity represents the ability to yield good performance for all the possible situations (i.e. (x; y) pairs) and is thus called generalisation error . The optimal set 1 parameters w: f w : x 7! b y.",
+ "neighbors": [
+ 493,
+ 556
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 786,
+ "label": 1,
+ "text": "Title: A Hybrid GP/GA Approach for Co-evolving Controllers and Robot Bodies to Achieve Fitness-Specified Tasks \nAbstract: Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper. ",
+ "neighbors": [
+ 91,
+ 123,
+ 437,
+ 646
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 787,
+ "label": 0,
+ "text": "Title: ABSTRACTION CONSIDERED HARMFUL: LAZY LEARNING OF LANGUAGE PROCESSING \nAbstract: When m = 0 (no delays), we set A 0 (ffi) = f(j; k) ; j 6= kg, such that P m (*jffi) depends only on *. The estimated probabilities above become quite noisy when the number of elements in set A m and B m are small. For this reason, we estimate the standard deviation of P m (*jffi). Notice that this estimate is the empirical average of a binomial variable (either a given couple satisfied the conditions on ffi and *, or it does not). The standard deviation is then estimated easily by: Generally speaking, P m (*jffi) increases with * (laxer output test), and when ffi approaches 0 (stricter input condition). Let us now define by P m (*) the maximum over ffi of P m (*jffi): P m (*) = max ffi>0 P m (*jffi). The dependability index is defined as: P 0 (*) represents how much data passes the continuity test when no input information is available. This dependability index measures how much of the remaining continuity information is associated with involving input i m . This index is then averaged over * with respect to the probability (1 P 0 (*)): m (*) (1 P 0 (*)) d* (4.8) It is clear that m (*), and therefore its average, should be positive quantities. Furthermore, if the system is deterministic, the dependability is zero after a certain number of inputs, so the sum of averages saturates. If the system is also noise-free, they sum up to 1. For any m greater than the embedding dimension: refers to results obtained using this method. 4.6 Statistical variable selection Statistical variable selection (or feature selection) encompasses a number of techniques aimed at choosing a relevant subset of input variables in a regression or a classification problem. As in the rest of this document, we will limit ourselves to considerations related to the regression problem, even though most methods discussed below apply to classification as well. Variable selection can be seen as a part of the data analysis problem: the selection (or discard) of a variable tells us about the relevance of the associated measurement to the modelled system. In a general setting, this is a purely combinatorial problem: given V possible variables, there is 2 V possible subsets (including the empty set and the full set) of these variables. Given a performance measure, such as prediction error, the only optimal scheme is to test all these subset and choose the one that gives the best performance. It is easy to see that such an extensive scheme is only viable when the number of variables is rather low. Identifying 2 V models when we have more than a few variables requires too much computation. A number of techniques have been devised to overcome this combinatorial limit. Some of them use an iterative, locally optimal technique to construct an estimate of the relevant subset in a number of steps. We will refer to them as stepwise selection methods, not to be con fused with stepwise regression, a subset of these methods that we will address below. In forward selection, we start with an empty set of variables. At each step, we select a candidate variable using a selection criteria, check whether this variable should be added to the set, and iterate until a given stop condition is reached. On the contrary, backward elimination methods start with the full set of all input variables. At each step, the least significant variable is selected according to a selection criteria. 
If this variable is irrelevant, it is removed and the process is iterated until a stop condition is reached. It is easy to devise examples where the inclusion of a variable causes a previously included variable to become irrelevant. It thus seems appropriate to consider running a backward elimination each time a new variable is added by forward selection. This combination of both ap proaches is known as stepwise regression in the linear regression con",
+ "neighbors": [
+ 456,
+ 653,
+ 990
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 788,
+ "label": 1,
+ "text": "Title: Evolution of Mapmaking: Learning, planning, and memory using Genetic Programming \nAbstract: An essential component of an intelligent agent is the ability to observe, encode, and use information about its environment. Traditional approaches to Genetic Programming have focused on evolving functional or reactive programs with only a minimal use of state. This paper presents an approach for investigating the evolution of learning, planning, and memory using Genetic Programming. The approach uses a multi-phasic fitness environment that enforces the use of memory and allows fairly straightforward comprehension of the evolved representations . An illustrative problem of 'gold' collection is used to demonstrate the usefulness of the approach. The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. ",
+ "neighbors": [
+ 70,
+ 459,
+ 1161,
+ 1312
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 789,
+ "label": 2,
+ "text": "Title: Connection Pruning with Static and Adaptive Pruning Schedules \nAbstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (e.g. OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required.",
+ "neighbors": [
+ 510,
+ 674,
+ 1155
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 790,
+ "label": 0,
+ "text": "Title: References Automatic student modeling and bug library construction using theory refinement. Ph.D. ml/ Symbolic revision\nAbstract: ASSERT demonstrates how theory refinement techniques developed in machine learning can be used to ef fec-tively build student models for intelligent tutoring systems. This application is unique since it inverts the normal goal of theory refinement from correcting errors in a knowledge base to introducing them. A comprehensive experiment involving a lar ge number of students interacting with an automated tutor for teaching concepts in C ++ programming was used to evaluate the approach. This experiment demonstrated the ability of theory refinement to generate more accurate student models than raw induction, as well as the ability of the resulting models to support individualized feedback that actually improves students subsequent performance. Carr, B. and Goldstein, I. (1977). Overlays: a theory of modeling for computer aided instruction. T echnical Report A. I. Memo 406, Cambridge, MA: MIT. Sandberg, J. and Barnard, Y . (1993). Education and technology: What do we know? And where is AI? Artificial Intelligence Communications, 6(1):47-58. ",
+ "neighbors": [
+ 72,
+ 624
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 791,
+ "label": 3,
+ "text": "Title: Tractable Inference for Complex Stochastic Processes \nAbstract: The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief statea probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.",
+ "neighbors": [
+ 458,
+ 546,
+ 711,
+ 722,
+ 723,
+ 783
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 792,
+ "label": 2,
+ "text": "Title: Generating Accurate and Diverse Members of a Neural-Network Ensemble \nAbstract: Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called Addemup that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that Addemup is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.",
+ "neighbors": [
+ 317,
+ 480,
+ 686,
+ 696,
+ 714,
+ 925
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 793,
+ "label": 2,
+ "text": "Title: Comparison of Regression Methods, Symbolic Induction Methods and Neural Networks in Morbidity Diagnosis and Mortality\nAbstract: Classifier induction algorithms differ on what inductive hypotheses they can represent, and on how they search their space of hypotheses. No classifier is better than another for all problems: they have selective superiority. This paper empirically compares six classifier induction algorithms on the diagnosis of equine colic and the prediction of its mortality. The classification is based on simultaneously analyzing sixteen features measured from a patient. The relative merits of the algorithms (linear regression, decision trees, nearest neighbor classifiers, the Model Class Selection system, logistic regression (with and without feature selection), and neural nets) are qualitatively discussed, and the generalization accuracies quantitatively analyzed. ",
+ "neighbors": [
+ 743,
+ 1305
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 794,
+ "label": 1,
+ "text": "Title: Multi-parent's niche: n-ary crossovers on NK-landscapes \nAbstract: Using the multi-parent diagonal and scanning crossover in GAs reproduction operators obtain an adjustable arity. Hereby sexuality becomes a graded feature instead of a Boolean one. Our main objective is to relate the performance of GAs to the extent of sexuality used for reproduction on less arbitrary functions then those reported in the current literature. We investigate GA behaviour on Kauffman's NK-landscapes that allow for systematic characterization and user control of ruggedness of the fitness landscape. We test GAs with a varying extent of sexuality, ranging from asexual to 'very sexual'. Our tests were performed on two types of NK-landscapes: landscapes with random and landscapes with nearest neighbour epistasis. For both landscape types we selected landscapes from a range of ruggednesses. The results confirm the superiority of (very) sexual recombination on mildly epistatic problems.",
+ "neighbors": [
+ 683,
+ 851,
+ 986
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 795,
+ "label": 3,
+ "text": "Title: SINGLE FACTOR ANALYSIS BY MML ESTIMATION \nAbstract: The Minimum Message Length (MML) technique is applied to the problem of estimating the parameters of a multivariate Gaussian model in which the correlation structure is modelled by a single common factor. Implicit estimator equations are derived and compared with those obtained from a Maximum Likelihood (ML) analysis. Unlike ML, the MML estimators remain consistent when used to estimate both the factor loadings and factor scores. Tests on simulated data show the MML estimates to be on av erage more accurate than the ML estimates when the former exist. If the data show little evidence for a factor, the MML estimate collapses. It is shown that the condition for the existence of an MML estimate is essentially that the log likelihood ratio in favour of the factor model exceed the value expected under the null (no-factor) hypotheses. ",
+ "neighbors": [
+ 302
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 796,
+ "label": 5,
+ "text": "Title: Inverse entailment and Progol \nAbstract: This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The re-assessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses. ",
+ "neighbors": [
+ 735,
+ 894,
+ 907,
+ 1168,
+ 1204,
+ 1320
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 797,
+ "label": 2,
+ "text": "Title: Learning to Represent Codons: A Challenge Problem for Constructive Induction \nAbstract: The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. Systems that perform constructive induction are able to change their representation by constructing new features. We describe an important, real-world problem finding genes in DNA that we believe offers an interesting challenge to constructive-induction researchers. We report experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for constructive induction because the relationship between the two representations is well known, and because the representational shift involved in construct ing the better representation is not imposing.",
+ "neighbors": [
+ 405
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 798,
+ "label": 2,
+ "text": "Title: Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming \nAbstract: Further Results on Controllability Abstract This paper studies controllability properties of recurrent neural networks. The new contributions are: (1) an extension of a previous result to a slightly different model, (2) a formulation and proof of a necessary and sufficient condition, and (3) an analysis of a low-dimensional case for which the of Recurrent Neural Networks fl",
+ "neighbors": [
+ 588,
+ 596,
+ 597
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 799,
+ "label": 3,
+ "text": "Title: Coupled hidden Markov models for complex action recognition \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 407 20nov96 Abstract We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and a clear Bayesian semantics. However, the Markovian framework makes strong restrictive assumptions about the system generating the signalthat it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions. ",
+ "neighbors": [
+ 457,
+ 722,
+ 783,
+ 891
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 800,
+ "label": 4,
+ "text": "Title: Value Function Approximations and Job-Shop Scheduling \nAbstract: We report a successful application of TD() with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program. The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact, it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses. ",
+ "neighbors": [
+ 45,
+ 327,
+ 776
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 801,
+ "label": 5,
+ "text": "Title: Learning Goal-Decomposition Rules using Exercises \nAbstract: Exercises are problems ordered in increasing order of difficulty. Teaching problem-solving through exercises is a widely used pedagogic technique. A computational reason for this is that the knowledge gained by solving simple problems is useful in efficiently solving more difficult problems. We adopt this approach of learning from exercises to acquire search-control knowledge in the form of goal-decomposition rules (d-rules). D-rules are first order, and are learned using a new \"generalize-and-test\" algorithm which is based on inductive logic programming techniques. We demonstrate the feasibility of the approach by applying it in two planning do mains.",
+ "neighbors": [
+ 198,
+ 393,
+ 802
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 802,
+ "label": 5,
+ "text": "Title: Theory-guided Empirical Speedup Learning of Goal Decomposition Rules \nAbstract: Speedup learning is the study of improving the problem-solving performance with experience and from outside guidance. We describe here a system that successfully combines the best features of Explanation-based learning and empirical learning to learn goal decomposition rules from examples of successful problem solving and membership queries. We demonstrate that our system can efficiently learn effective decomposition rules in three different domains. Our results suggest that theory-guided empirical learning can overcome the problems of purely explanation-based learning and purely empirical learning, and be an effective speedup learning method.",
+ "neighbors": [
+ 198,
+ 393,
+ 801
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 803,
+ "label": 2,
+ "text": "Title: STABILIZATION WITH SATURATED ACTUATORS, A WORKED EXAMPLE:F-8 LONGITUDINAL FLIGHT CONTROL \nAbstract: The authors and coworkers recently proved general theorems on the global stabilization of linear systems subject to control saturation. This paper develops in detail an explicit design for the linearized equations of longitudinal flight control for an F-8 aircraft, and tests the obtained controller on the original nonlinear model. This paper represents the first detailed derivation of a controller using the techniques in question, and the results are very encouraging. ",
+ "neighbors": [
+ 549,
+ 719
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 804,
+ "label": 6,
+ "text": "Title: Adaptive Mixtures of Probabilistic Transducers \nAbstract: We describe and analyze a mixture model for supervised learning of probabilistic transducers. We devise an on-line learning algorithm that efficiently infers the structure and estimates the parameters of each probabilistic transducer in the mixture. Theoretical analysis and comparative simulations indicate that the learning algorithm tracks the best transducer from an arbitrarily large (possibly infinite) pool of models. We also present an application of the model for inducing a noun phrase recognizer.",
+ "neighbors": [
+ 255,
+ 586
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 805,
+ "label": 2,
+ "text": "Title: On the Computation of the Induced L 2 Norm of Single Input Linear Systems with Saturation \nAbstract: Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computation-ally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms.",
+ "neighbors": [
+ 713,
+ 756,
+ 896
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 806,
+ "label": 0,
+ "text": "Title: Automatic Indexing, Retrieval and Reuse of Topologies in Architectual Layouts \nAbstract: Former layouts contain much of the know-how of architects. A generic and automatic way to formalize this know-how in order to use it by a computer would save a lot of effort and money. However, there seems to be no such way. The only access to the know-how are the layouts themselves. Developing a generic software tool to reuse former layouts you cannot consider every part of the architectual domain or things like personal style. Tools used today only consider small parts of the architectual domain. Any personal style is ignored. Isn't it possible to build a basic tool which is adjusted by the content of the former layouts, but may be extended incremently by modeling as much of the domain as desirable? This paper will describe a reuse tool to perform this task focusing on topological and geometrical binary relations.",
+ "neighbors": [
+ 309,
+ 678
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 807,
+ "label": 1,
+ "text": "Title: Self-Adaptation in Genetic Algorithms of external parameters of a GA is seen as a first\nAbstract: In this paper a new approach is presented, which transfers a basic idea from Evolution Strategies (ESs) to GAs. Mutation rates are changed into endogeneous items which are adapting during the search process. First experimental results are presented, which indicate that environment-dependent self-adaptation of appropriate settings for the mutation rate is possible even for GAs. ",
+ "neighbors": [
+ 462,
+ 610,
+ 652,
+ 937
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 808,
+ "label": 6,
+ "text": "Title: An Interactive Model of Teaching \nAbstract: Previous teaching models in the learning theory community have been batch models. That is, in these models the teacher has generated a single set of helpful examples to present to the learner. In this paper we present an interactive model in which the learner has the ability to ask queries as in the query learning model of Angluin [1]. We show that this model is at least as powerful as previous teaching models. We also show that anything learnable with queries, even by a randomized learner, is teachable in our model. In all previous teaching models, all classes shown to be teachable are known to be efficiently learnable. An important concept class that is not known to be learnable is DNF formulas. We demonstrate the power of our approach by providing a deterministic teacher and learner for the class of DNF formulas. The learner makes only equivalence queries and all hypotheses are also DNF formulas. ",
+ "neighbors": [
+ 179,
+ 621,
+ 754
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 809,
+ "label": 2,
+ "text": "Title: Actively Searching for an Effective Neural-Network Ensemble \nAbstract: A neural-network ensemble is a very successful technique where the outputs of a set of separately trained neural network are combined to form one unified prediction. An effective ensemble should consist of a set of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well; however, most existing techniques only indirectly address the problem of creating such a set. We present an algorithm called Addemup that uses genetic algorithms to explicitly search for a highly diverse set of accurate trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible. Experiments on four real-world domains show that Addemup is able to generate a set of trained networks that is more accurate than several existing ensemble approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble. ",
+ "neighbors": [
+ 91,
+ 330,
+ 480,
+ 696,
+ 814
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 810,
+ "label": 3,
+ "text": "Title: Sequential Thresholds: Context Sensitive Default Extensions \nAbstract: Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instan-tiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both.",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 811,
+ "label": 4,
+ "text": "Title: Generalized Markov Decision Processes: Dynamic-programming and Reinforcement-learning Algorithms \nAbstract: The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (mdp), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in mdps given either an mdp specification or the opportunity to interact with the mdp over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes mdps as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse mdps, exploration-sensitive mdps, sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence. ",
+ "neighbors": [
+ 30,
+ 92,
+ 120,
+ 328,
+ 370,
+ 391,
+ 434,
+ 859,
+ 939,
+ 1162
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 812,
+ "label": 6,
+ "text": "Title: Learning from Queries and Examples with Tree-structured Bias \nAbstract: Incorporating declarative bias or prior knowledge into learning is an active research topic in machine learning. Tree-structured bias specifies the prior knowledge as a tree of \"relevance\" relationships between attributes. This paper presents a learning algorithm that implements tree-structured bias, i.e., learns any target function probably approximately correctly from random examples and membership queries if it obeys a given tree-structured bias. The theoretical predictions of the paper are em pirically validated.",
+ "neighbors": [
+ 374,
+ 392,
+ 537
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 813,
+ "label": 2,
+ "text": "Title: Learning in Boltzmann Trees \nAbstract: We introduce a large family of Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation. We present results on the problems of N -bit parity and the detection of hidden symmetries.",
+ "neighbors": [
+ 176,
+ 723,
+ 891
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 814,
+ "label": 2,
+ "text": "Title: Learning from Bad Data \nAbstract: The data describing resolutions to telephone network local loop \"troubles,\" from which we wish to learn rules for dispatching technicians, are notoriously unreliable. Anecdotes abound detailing reasons why a resolution entered by a technician would not be valid, ranging from sympathy to fear to ignorance to negligence to management pressure. In this paper, we describe four different approaches to dealing with the problem of \"bad\" data in order first to determine whether machine learning has promise in this domain, and then to determine how well machine learning might perform. We then offer evidence that machine learning can help to build a dispatching method that will perform better than the system currently in place.",
+ "neighbors": [
+ 809,
+ 925
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 815,
+ "label": 2,
+ "text": "Title: LINEAR SYSTEMS WITH SIGN-OBSERVATIONS \nAbstract: This paper deals with systems that are obtained from linear time-invariant continuous-or discrete-time devices followed by a function that just provides the sign of each output. Such systems appear naturally in the study of quantized observations as well as in signal processing and neural network theory. Results are given on observability, minimal realizations, and other system-theoretic concepts. Certain major differences exist with the linear case, and other results generalize in a surprisingly straightforward manner. ",
+ "neighbors": [
+ 112,
+ 584,
+ 623,
+ 705,
+ 819
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 816,
+ "label": 1,
+ "text": "Title: A STUDY OF CROSSOVER OPERATORS IN GENETIC PROGRAMMING \nAbstract: Holland's analysis of the sources of power of genetic algorithms has served as guidance for the applications of genetic algorithms for more than 15 years. The technique of applying a recombination operator (crossover) to a population of individuals is a key to that power. Neverless, there have been a number of contradictory results concerning crossover operators with respect to overall performance. Recently, for example, genetic algorithms were used to design neural network modules and their control circuits. In these studies, a genetic algorithm without crossover outperformed a genetic algorithm with crossover. This report re-examines these studies, and concludes that the results were caused by a small population size. New results are presented that illustrate the effectiveness of crossover when the population size is larger. From a performance view, the results indicate that better neural networks can be evolved in a shorter time if the genetic algorithm uses crossover. ",
+ "neighbors": [
+ 420,
+ 545,
+ 579
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 817,
+ "label": 1,
+ "text": "Title: Adaptive Strategy Selection for Concept Learning \nAbstract: In this paper, we explore the use of genetic algorithms (GAs) to construct a system called GABIL that continually learns and refines concept classification rules from its interac - tion with the environment. The performance of this system is compared with that of two other concept learners (NEWGEM and C4.5) on a suite of target concepts. From this comparison, we identify strategies responsible for the success of these concept learners. We then implement a subset of these strategies within GABIL to produce a multistrategy concept learner. Finally, this multistrategy concept learner is further enhanced by allowing the GAs to adaptively select the appropriate strategies. ",
+ "neighbors": [
+ 91,
+ 462,
+ 745
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 818,
+ "label": 6,
+ "text": "Title: Preventing \"Overfitting\" of Cross-Validation Data \nAbstract: Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted such as by noise, and if the set of hypotheses we are selecting from is large, then \"folklore\" also warns about \"overfitting\" the cross-validation data [Klockars and Sax, 1986, Tukey, 1949, Tukey, 1953]. In this paper, we explain how this \"overfitting\" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computa-tionally efficient form of leave-one-out cross-validation to select such a hypothesis. Finally, we present experimental results for one domain, that show LOOCVCV consistently beating picking the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets. ",
+ "neighbors": [
+ 374,
+ 492,
+ 493
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 819,
+ "label": 2,
+ "text": "Title: Interconnected Automata and Linear Systems: A Theoretical Framework in Discrete-Time In Hybrid Systems III: Verification\nAbstract: This paper summarizes the definitions and several of the main results of an approach to hybrid systems, which combines finite automata and linear systems, developed by the author in the early 1980s. Some related more recent results are briefly mentioned as well. ",
+ "neighbors": [
+ 233,
+ 815
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 820,
+ "label": 2,
+ "text": "Title: New Characterizations of Input to State Stability \nAbstract: We present new characterizations of the Input to State Stability property. As a consequence of these results, we show the equivalence between the ISS property and several (apparent) variations proposed in the literature. ",
+ "neighbors": [
+ 251,
+ 403,
+ 719
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 821,
+ "label": 1,
+ "text": "Title: Evolving Non-Determinism: An Inventive and Efficient Tool for Optimization and Discovery of Strategies \nAbstract: In the field of optimization and machine learning techniques, some very efficient and promising tools like Genetic Algorithms (GAs) and Hill-Climbing have been designed. In this same field, the Evolving Non-Determinism (END) model proposes an inventive way to explore the space of states that, combined with the use of simulated co-evolution, remedies some drawbacks of these previous techniques and even allow this model to outperform them on some difficult problems. This new model has been applied to the sorting network problem, a reference problem that challenged many computer scientists, and an original one-player game named Solitaire. For the first problem, the END model has been able to build from scratch some sorting networks as good as the best known for the 16-input problem. It even improved by one comparator a 25 years old result for the 13-input problem! For the Solitaire game, END evolved a strategy comparable to a human designed strategy. ",
+ "neighbors": [
+ 218,
+ 702,
+ 822
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 822,
+ "label": 1,
+ "text": "Title: Incremental Co-evolution of Organisms: A New Approach for Optimization and Discovery of Strategies \nAbstract: In the field of optimization and machine learning techniques, some very efficient and promising tools like Genetic Algorithms (GAs) and Hill-Climbing have been designed. In this same field, the Evolving Non-Determinism (END) model presented in this paper proposes an inventive way to explore the space of states that, using the simulated \"incremental\" co-evolution of some organisms, remedies some drawbacks of these previous techniques and even allow this model to outperform them on some difficult problems. This new model has been applied to the sorting network problem, a reference problem that challenged many computer scientists, and an original one-player game named Solitaire. For the first problem, the END model has been able to build from \"scratch\" some sorting networks as good as the best known for the 16-input problem. It even improved by one comparator a 25 years old result for the 13-input problem. For the Solitaire game, END evolved a strategy comparable to a human designed strategy. ",
+ "neighbors": [
+ 218,
+ 702,
+ 821,
+ 955
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 823,
+ "label": 3,
+ "text": "Title: Revising Bayesian Network Parameters Using Backpropagation \nAbstract: The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique. ",
+ "neighbors": [
+ 72,
+ 624,
+ 1290
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 824,
+ "label": 1,
+ "text": "Title: An Evolutionary Approach to Learning in Robots \nAbstract: Evolutionary learning methods have been found to be useful in several areas in the development of intelligent robots. In the approach described here, evolutionary algorithms are used to explore alternative robot behaviors within a simulation model as a way of reducing the overall knowledge engineering effort. This paper presents some initial results of applying the SAMUEL genetic learning system to a collision avoidance and navigation task for mobile robots.",
+ "neighbors": [
+ 529,
+ 553,
+ 554,
+ 555,
+ 878
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 825,
+ "label": 0,
+ "text": "Title: Context-Based Similarity Applied to Retrieval of Relevant Cases \nAbstract: Retrieving relevant cases is a crucial component of case-based reasoning systems. The task is to use user-defined query to retrieve useful information, i.e., exact matches or partial matches which are close to query-defined request according to certain measures. The difficulty stems from the fact that it may not be easy (or it may be even impossible) to specify query requests precisely and completely resulting in a situation known as a fuzzy-querying. It is usually not a problem for small domains, but for a large repositories which store various information (multifunctional information bases or a federated databases), a request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, allowing for imprecise query specification and for changing the viewpoint. Efficient database techniques exists for locating exact matches. Finding relevant partial matches might be a problem. This document proposes a context-based similarity as a basis for flexible retrieval. Historical bacground on research in similarity assessment is presented and is used as a motivation for formal definition of context-based similarity. We also describe a similarity-based retrieval system for multifunctinal information bases. ",
+ "neighbors": [
+ 637,
+ 638,
+ 834,
+ 1093,
+ 1095
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 826,
+ "label": 6,
+ "text": "Title: Experiments with a New Boosting Algorithm \nAbstract: In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem. ",
+ "neighbors": [
+ 540,
+ 602,
+ 619,
+ 684,
+ 696,
+ 846
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 827,
+ "label": 2,
+ "text": "Title: Maximizing the Robustness of a Linear Threshold Classifier with Discrete Weights \nAbstract: Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness on input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to tackle the combinatorial problem arising from the discrete weights. ",
+ "neighbors": [
+ 654,
+ 703
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 828,
+ "label": 5,
+ "text": "Title: Induction of decision trees and Bayesian classification applied to diagnosis of sport injuries \nAbstract: Machine learning techniques can be used to extract knowledge from data stored in medical databases. In our application, various machine learning algorithms were used to extract diagnostic knowledge to support the diagnosis of sport injuries. The applied methods include variants of the Assistant algorithm for top-down induction of decision trees, and variants of the Bayesian classifier. The available dataset was insufficent for reliable diagnosis of all sport injuries considered by the system. Consequently, expert-defined diagnostic rules were added and used as pre-classifiers or as generators of additional training instances for injuries with few training examples. Experimental results show that the classification accuracy and the explanation capability of the naive Bayesian classifier with the fuzzy discretization of numerical attributes was superior to other methods and was estimated as the most appro priate for practical use.",
+ "neighbors": [
+ 239,
+ 574,
+ 875
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 829,
+ "label": 2,
+ "text": "Title: A Divide-and-Conquer Approach to Learning from Prior Knowledge \nAbstract: This paper introduces a new machine learning task|model calibration|and presents a method for solving a particularly difficult model calibration task that arose as part of a global climate change research project. The model calibration task is the problem of training the free parameters of a scientific model in order to optimize the accuracy of the model for making future predictions. It is a form of supervised learning from examples in the presence of prior knowledge. An obvious approach to solving calibration problems is to formulate them as global optimization problems in which the goal is to find values for the free parameters that minimize the error of the model on training data. Unfortunately, this global optimization approach becomes computationally infeasible when the model is highly nonlinear. This paper presents a new divide-and-conquer method that analyzes the model to identify a series of smaller optimization problems whose sequential solution solves the global calibration problem. This paper argues that methods of this kind|rather than global optimization techniques|will be required in order for agents with large amounts of prior knowledge to learn efficiently. ",
+ "neighbors": [
+ 852
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 830,
+ "label": 2,
+ "text": "Title: Identification and Control of Nonlinear Systems Using Neural Network Models: Design and Stability Analysis \nAbstract: Report 91-09-01 September 1991 (revised) May 1994 ",
+ "neighbors": [
+ 116,
+ 240,
+ 357,
+ 562,
+ 831,
+ 929
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 831,
+ "label": 2,
+ "text": "Title: FEEDBACK STABILIZATION USING TWO-HIDDEN-LAYER NETS \nAbstract: This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for certain problems two hidden layers are required, contrary to what might be in principle expected from the known approximation theorems. The differences are not based on numerical accuracy or number of units needed, nor on capabilities for feature extraction, but rather on a much more basic classification into \"direct\" and \"inverse\" problems. The former correspond to the approximation of continuous functions, while the latter are concerned with approximating one-sided inverses of continuous functions |and are often encountered in the context of inverse kinematics determination or in control questions. A general result is given showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one. ",
+ "neighbors": [
+ 116,
+ 305,
+ 830
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 832,
+ "label": 6,
+ "text": "Title: Discovery as Autonomous Learning from the Environment \nAbstract: Discovery involves collaboration among many intelligent activities. However, little is known about how and in what form such collaboration occurs. In this paper, a framework is proposed for autonomous systems that learn and discover from their environment. Within this framework, many intelligent activities such as perception, action, exploration, experimentation, learning, problem solving, and new term construction can be integrated in a coherent way. The framework is presented in detail through an implemented system called LIVE, and is evaluated through the performance of LIVE on several discovery tasks. The conclusion is that autonomous learning from the environment is a feasible approach for integrating the activities involved in a discovery process.",
+ "neighbors": [
+ 494,
+ 524,
+ 897
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 833,
+ "label": 0,
+ "text": "Title: Combining Rules and Cases to Learn Case Adaptation \nAbstract: Computer models of case-based reasoning (CBR) generally guide case adaptation using a fixed set of adaptation rules. A difficult practical problem is how to identify the knowledge required to guide adaptation for particular tasks. Likewise, an open issue for CBR as a cognitive model is how case adaptation knowledge is learned. We describe a new approach to acquiring case adaptation knowledge. In this approach, adaptation problems are initially solved by reasoning from scratch, using abstract rules about structural transformations and general memory search heuristics. Traces of the processing used for successful rule-based adaptation are stored as cases to enable future adaptation to be done by case-based reasoning. When similar adaptation problems are encountered in the future, these adaptation cases provide task- and domain-specific guidance for the case adaptation process. We present the tenets of the approach concerning the relationship between memory search and case adaptation, the memory search process, and the storage and reuse of cases representing adaptation episodes. These points are discussed in the context of ongoing research on DIAL, a computer model that learns case adaptation knowledge for case-based disaster response planning. ",
+ "neighbors": [
+ 337,
+ 639,
+ 679
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 834,
+ "label": 0,
+ "text": "Title: INFERENTIAL THEORY OF LEARNING: Developing Foundations for Multistrategy Learning \nAbstract: The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learners knowledge by exploring the learners experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learners experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inferencededuction, induction or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of the detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated b y an example. MTL dynamically adapts strategies to the learning task, defined by the input information, learners background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization. ",
+ "neighbors": [
+ 91,
+ 167,
+ 339,
+ 825,
+ 854,
+ 1238
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 835,
+ "label": 3,
+ "text": "Title: Belief Networks, Hidden Markov Models, and Markov Random Fields: a Unifying View \nAbstract: The use of graphs to represent independence structure in multivariate probability models has been pursued in a relatively independent fashion across a wide variety of research disciplines since the beginning of this century. This paper provides a brief overview of the current status of such research with particular attention to recent developments which have served to unify such seemingly disparate topics as probabilistic expert systems, statistical physics, image analysis, genetics, decoding of error-correcting codes, Kalman filters, and speech recognition with Markov models.",
+ "neighbors": [
+ 336,
+ 448,
+ 723,
+ 783
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 836,
+ "label": 3,
+ "text": "Title: Belief Revision in Probability Theory \nAbstract: In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.",
+ "neighbors": [
+ 628,
+ 837,
+ 839,
+ 849
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 837,
+ "label": 3,
+ "text": "Title: From Inheritance Relation to Non-Axiomatic Logic \nAbstract: At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represents and processes randomness, fuzziness, and ignorance. It can also uniformly carries out deduction, abduction, induction, and revision. ",
+ "neighbors": [
+ 628,
+ 836,
+ 838,
+ 839,
+ 849
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 838,
+ "label": 3,
+ "text": "Title: Non-Axiomatic Reasoning System (Version 2.2) used to show how the system works. The limitations of\nAbstract: NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and in which deduction, induction, abduction, and revision are carried out in a unified format. The system works in an asynchronously parallel way. The memory of the system is dynamically organized, and can also be interpreted as a network. ",
+ "neighbors": [
+ 628,
+ 837,
+ 839,
+ 849
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 839,
+ "label": 3,
+ "text": "Title: A Unified Treatment of Uncertainties \nAbstract: Uncertainty in artificial intelligence\" is an active research field, where several approaches have been suggested and studied for dealing with various types of uncertainty. However, it's hard to rank the approaches in general, because each of them is usually aimed at a special application environment. This paper begins by defining such an environment, then show why some existing approaches cannot be used in such a situation. Then a new approach, Non-Axiomatic Reasoning System, is introduced to work in the environment. The system is designed under the assumption that the system's knowledge and resources are usually insufficient to handle the tasks imposed by its environment. The system can consistently represent several types of uncertainty, and can carry out multiple operations on these uncertainties. Finally, the new approach is compared with the previous approaches in terms of uncertainty representation and interpretation.",
+ "neighbors": [
+ 628,
+ 836,
+ 837,
+ 838
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 840,
+ "label": 6,
+ "text": "Title: The Problem with Noise and Small Disjuncts \nAbstract: Systems that learn from examples often create a disjunctive concept definition. The disjuncts in the concept definition which cover only a few training examples are referred to as small disjuncts. The problem with small disjuncts is that they are more error prone than large disjuncts, but may be necessary to achieve a high level of predictive accuracy [Holte, Acker, and Porter, 1989]. This paper extends previous work done on the problem of small disjuncts by taking noise into account. It investigates the assertion that it is hard to learn from noisy data because it is difficult to distinguish between noise and true exceptions. In the process of evaluating this assertion, insights are gained into the mechanisms by which noise affects learning. Two domains are investigated. The experimental results in this paper suggest that for both Shapiro's chess endgame domain [Shapiro, 1987] and for the Wisconsin breast cancer domain [Wolberg, 1990], the assertion is true, at least for low levels (5-10%) of class noise. ",
+ "neighbors": [
+ 460,
+ 694
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 841,
+ "label": 2,
+ "text": "Title: In Stable Dynamic Parameter Adaptation \nAbstract: ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 842,
+ "label": 6,
+ "text": "Title: Is Consistency Harmful? \nAbstract: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. ",
+ "neighbors": [
+ 241,
+ 745
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 843,
+ "label": 4,
+ "text": "Title: Evolving Optimal Populations with XCS Classifier Systems \nAbstract: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. ",
+ "neighbors": [
+ 91,
+ 883
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 844,
+ "label": 1,
+ "text": "Title: Solving 3-SAT by GAs Adapting Constraint Weights \nAbstract: Handling NP complete problems with GAs is a great challenge. In particular the presence of constraints makes finding solutions hard for a GA. In this paper we present a problem independent constraint handling mechanism, Stepwise Adaptation of Weights (SAW), and apply it for solving the 3-SAT problem. Our experiments prove that the SAW mechanism substantially increases GA performance. Furthermore, we compare our SAW-ing GA with the best heuristic technique we could trace, WGSAT, and conclude that the GA is superior to the heuristic method. ",
+ "neighbors": [
+ 482,
+ 643,
+ 683
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 845,
+ "label": 2,
+ "text": "Title: Equivariant adaptive source separation \nAbstract: Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Close form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach. ",
+ "neighbors": [
+ 32,
+ 331,
+ 483,
+ 487,
+ 505,
+ 506,
+ 848
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 846,
+ "label": 6,
+ "text": "Title: Improving Bagging Performance by Increasing Decision Tree Diversity \nAbstract: ARCING THE EDGE Leo Breiman Technical Report 486 , Statistics Department University of California, Berkeley CA. 94720 Abstract Recent work has shown that adaptively reweighting the training set, growing a classifier using the new weights, and combining the classifiers constructed to date can significantly decrease generalization error. Procedures of this type were called arcing by Breiman[1996]. The first successful arcing procedure was introduced by Freund and Schapire[1995,1996] and called Adaboost. In an effort to explain why Adaboost works, Schapire et.al. [1997] derived a bound on the generalization error of a convex combination of classifiers in terms of the margin. We introduce a function called the edge, which differs from the margin only if there are more than two classes. A framework for understanding arcing algorithms is defined. In this framework, we see that the arcing algorithms currently in the literature are optimization algorithms which minimize some function of the edge. A relation is derived between the optimal reduction in the maximum value of the edge and the PAC concept of weak learner. Two algorithms are described which achieve the optimal reduction. Tests on both synthetic and real data cast doubt on the Schapire et.al. There is recent empirical evidence that significant reductions in generalization error can be gotten by growing a number of different classifiers on the same training set and letting these vote for the best class. Freund and Schapire ([1995], [1996] ) proposed an algorithm called AdaBoost which adaptively reweights the training set in a way based on the past history of misclassifications, constructs a new classifier using the current weights, and uses the misclassification rate of this classifier to determine the size of its vote. In a number of empirical studies on many data sets using trees (CART or C4.5) as the base classifier (Drucker and Cortes[1995], Quinlan[1996], Freud and Schapire[1996], Breiman[1996]) AdaBoost produced dramatic decreases in generalization error compared to using a single tree. Error rates were reduced to the point where tests on some well-known data sets gave the result that CART plus AdaBoost did significantly better than any other of the commonly used classification methods (Breiman[1996] ). Meanwhile, empirical results showed that other methods of adaptive resampling (or reweighting) and combining (called \"arcing\" by Breiman [1996]) also led to low test set error rates. An algorithm called arc-x4 (Breiman[1996]) gave error rates almost identical to Adaboost. Ji and Ma[1997] worked with classifiers consisting of randomly selected hyperplanes and using a different method of adaptive resampling and unweighted voting, also got low error rates. Thus, there are a least three arcing algorithms extant, all of which give excellent classification accuracy. explanation.",
+ "neighbors": [
+ 330,
+ 826
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 847,
+ "label": 1,
+ "text": "Title: A Generalized Permutation Approach to Job Shop Scheduling with Genetic Algorithms \nAbstract: In order to sequence the tasks of a job shop problem (JSP) on a number of machines related to the technological machine order of jobs, a new representation technique mathematically known as \"permutation with repetition\" is presented. The main advantage of this single chromosome representation is in analogy to the permutation scheme of the traveling salesman problem (TSP) that it cannot produce illegal sets of operation sequences (infeasible symbolic solutions). As a consequence of the representation scheme a new crossover operator preserving the initial scheme structure of permutations with repetition will be sketched. Its behavior is similar to the well known Order-Crossover for simple permutation schemes. Actually the GOX operator for permutations with repetition arises from a Generalisation of OX. Computational experiments show, that GOX passes the information from a couple of parent solutions efficiently to offspring solutions. Together, the new representation and GOX support the cooperative aspect of the genetic search for scheduling problems strongly. ",
+ "neighbors": [
+ 197,
+ 603,
+ 643
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 848,
+ "label": 2,
+ "text": "Title: BLIND SEPARATION OF DELAYED SOURCES BASED ON INFORMATION MAXIMIZATION \nAbstract: Recently, Bell and Sejnowski have presented an approach to blind source separation based on the information maximization principle. We extend this approach into more general cases where the sources may have been delayed with respect to each other. We present a network architecture capable of coping with such sources, and we derive the adaptation equations for the delays and the weights in the network by maximizing the information transferred through the network. Examples using wideband sources such as speech are presented to illustrate the algorithm. ",
+ "neighbors": [
+ 331,
+ 335,
+ 700,
+ 778,
+ 845
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 849,
+ "label": 3,
+ "text": "Title: Reference Classes and Multiple Inheritances \nAbstract: The reference class problem in probability theory and the multiple inheritances (extensions) problem in non-monotonic logics can be referred to as special cases of conflicting beliefs. The current solution accepted in the two domains is the specificity priority principle. By analyzing an example, several factors (ignored by the principle) are found to be relevant to the priority of a reference class. A new approach, Non-Axiomatic Reasoning System (NARS), is discussed, where these factors are all taken into account. It is argued that the solution provided by NARS is better than the solutions provided by probability theory and non-monotonic logics.",
+ "neighbors": [
+ 836,
+ 837,
+ 838
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 850,
+ "label": 3,
+ "text": "Title: A THEORY OF INFERRED CAUSATION perceive causal relationships in uncon trolled observations. 2. the task\nAbstract: This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data the algorithm can uncover the direction of causal influences as defined above. Finally, we ad dress the issue of non-temporal causation.",
+ "neighbors": [
+ 121,
+ 152,
+ 528,
+ 557,
+ 618,
+ 698,
+ 861,
+ 1027,
+ 1106,
+ 1139,
+ 1162
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 851,
+ "label": 1,
+ "text": "Title: Performance of Multi-Parent Crossover Operators on Numerical Function Optimization Problems \nAbstract: The multi-parent scanning crossover, generalizing the traditional uniform crossover, and diagonal crossover, generalizing 1-point (n-point) crossovers, were introduced in [5]. In subsequent publications, see [6, 18, 19], several aspects of multi-parent recombination are discussed. Due to space limitations, however, a full overview of experimental results showing the performance of multi-parent GAs on numerical optimization problems has never been published. This technical report is meant to fill this gap and make results available. ",
+ "neighbors": [
+ 78,
+ 91,
+ 683,
+ 794,
+ 1107
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 852,
+ "label": 3,
+ "text": "Title: Automated Decomposition of Model-based Learning Problems \nAbstract: A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional, model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate. ",
+ "neighbors": [
+ 190,
+ 321,
+ 336,
+ 455,
+ 829
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 853,
+ "label": 1,
+ "text": "Title: Evolving Visual Routines \nAbstract: Traditional machine vision assumes that the vision system recovers a a complete, labeled description of the world [ Marr, 1982 ] . Recently, several researchers have criticized this model and proposed an alternative model which considers perception as a distributed collection of task-specific, task-driven visual routines [ Aloimonos, 1993, Ullman, 1987 ] . Some of these researchers have argued that in natural living systems these visual routines are the product of natural selection [ Ramachandran, 1985 ] . So far, researchers have hand-coded task-specific visual routines for actual implementations (e.g. [ Chapman, 1993 ] ). In this paper we propose an alternative approach in which visual routines for simple tasks are evolved using an artificial evolution approach. We present results from a series of runs on actual camera images, in which simple routines were evolved using Genetic Programming techniques [ Koza, 1992 ] . The results obtained are promising: the evolved routines are able to correctly classify up to 93% of the images, which is better than the best algorithm we were able to write by hand. ",
+ "neighbors": [
+ 454,
+ 491,
+ 522,
+ 717
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 854,
+ "label": 0,
+ "text": "Title: The Use of Explicit Goals for Knowledge to Guide Inference and Learning \nAbstract: Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience. ",
+ "neighbors": [
+ 167,
+ 636,
+ 649,
+ 718,
+ 834,
+ 893
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 855,
+ "label": 0,
+ "text": "Title: Decision Models: A Theory of Volitional Explanation \nAbstract: This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations, as well as the process by which an understander builds the explanations. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used and evaluated.",
+ "neighbors": [
+ 167,
+ 368,
+ 758,
+ 857
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 856,
+ "label": 1,
+ "text": "Title: Representation and Evolution of Neural Networks \nAbstract: An evolutionary approach for developing improved neural network architectures is presented. It is shown that it is possible to use genetic algorithms for the construction of backpropagation networks for real world tasks. Therefore a network representation is developed with certain properties. Results with various application are presented. ",
+ "neighbors": [
+ 91,
+ 117,
+ 955
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 857,
+ "label": 0,
+ "text": "Title: Incremental Learning of Explanation Patterns and their Indices \nAbstract: This paper describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Recent work in AI has dealt with the issue of using past explanations stored in the reasoner's memory to understand novel situations. However, this process assumes that past explanations are well understood and provide good \"lessons\" to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Instead, it is reasonable to assume that the reasoner would have gaps in its knowledge base. By reasoning about a new situation, the reasoner should be able to fill in these gaps as new information came in, reorganize its explanations in memory, and gradually evolve a better understanding of its domain. We present a story understanding program that retrieves past explanations from situations already in memory, and uses them to build explanations to understand novel stories about terrorism. In doing so, the system refines its understanding of the domain by filling in gaps in these explanations, by elaborating the explanations, or by learning new indices for the explanations. This is a type of incremental learning since the system improves its explanatory knowledge of the domain in an incremental fashion rather than by learning new XPs as a whole.",
+ "neighbors": [
+ 167,
+ 368,
+ 758,
+ 855
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 858,
+ "label": 5,
+ "text": "Title: Finding new rules for incomplete theories: Explicit biases for induction with contextual information. In Proceedings\nAbstract: addressed in KBANN (which translates a theory into a neural-net, refines it using backpropagation, and then retranslates the result back into rules) by adding extra hidden units and connections to the initial network; however, this would require predetermining the num In this paper, we have presented constructive induction techniques recently added to the EITHER theory refinement system. Intermediate concept utilization employs existing rules in the theory to derive higher-level features for use in induction. Intermediate concept creation employs inverse resolution to introduce new intermediate concepts in order to fill gaps in a theory than span multiple levels. These revisions allow EITHER to make use of imperfect domain theories in the ways typical of previous work in both constructive induction and theory refinement. As a result, EITHER is able to handle a wider range of theory imperfections than any other existing theory refinement system. ",
+ "neighbors": [
+ 72,
+ 217,
+ 252,
+ 374,
+ 537,
+ 1108
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 859,
+ "label": 4,
+ "text": "Title: MultiPlayer Residual Advantage Learning With General Function Approximation \nAbstract: A new algorithm, advantage learning, is presented that improves on advantage updating by requiring that a single function be learned rather than two. Furthermore, advantage learning requires only a single type of update, the learning update, while advantage updating requires two different types of updates, a learning update and a normilization update. The reinforcement learning system uses the residual form of advantage learning. An application of reinforcement learning to a Markov game is presented. The testbed has continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. On each time step , each player chooses one of two possible actions; turn left or turn right, resulting in a 90 degree instantaneous change in the aircraft s heading. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. Speed of learning is increased by a new algorithm , Incremental Delta-Delta (IDD), which extends Jacobs (1988) Delta-Delta for use in incremental training, and differs from Suttons Incremental Delta-Bar-Delta (1992) in that it does not require the use of a trace and is amenable for use with general function approximation systems. The advantage learning algorithm for optimal control is modified for games in order to find the minimax point, rather than the maximum. Empirical results gathered using the missile/aircraft testbed validate theory that suggests residual forms of reinforcement learning algorithms converge to a local minimum of the mean squared Bellman residual when using general function approximation systems. Also, to our knowledge, this is the first time an approximate second order method has been used with residual algorithms. Empirical results are presented comparing convergence rates with and without the use of IDD for the reinforcement learning testbed described above and for a supervised learning testbed. The results of these experiments demonstrate IDD increased the rate of convergence and resulted in an order of magnitude lower total asymptotic error than when using backpropagation alone. ",
+ "neighbors": [
+ 327,
+ 776,
+ 811
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 860,
+ "label": 1,
+ "text": "Title: Unsupervised Learning with the Soft-Means Algorithm \nAbstract: This note describes a useful adaptation of the `peak seeking' regime used in unsupervised learning processes such as competitive learning and `k-means'. The adaptation enables the learning to capture low-order probability effects and thus to more fully capture the probabilistic structure of the training data. ",
+ "neighbors": [
+ 784
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 861,
+ "label": 3,
+ "text": "Title: Belief Networks Revisited \nAbstract: Experiment design and execution is a central activity in the natural sciences. The SeqER system provides a general architecture for the integration of automated planning techniques with a variety of domain knowledge in order to plan scientific experiments. These planning techniques include rule-based methods and, especially, the use of derivational analogy. Derivational analogy allows planning experience, captured as cases, to be reused. Analogy also allows the system to function in the absence of strong domain knowledge. Cases are efficiently and flexibly retrieved from a large casebase using massively parallel methods. ",
+ "neighbors": [
+ 740,
+ 850
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 862,
+ "label": 1,
+ "text": "Title: Monitoring in Embedded Agents \nAbstract: Finding good monitoring strategies is an important process in the design of any embedded agent. We describe the nature of the monitoring problem, point out what makes it difficult, and show that while periodic monitoring strategies are often the easiest to derive, they are not always the most appropriate. We demonstrate mathematically and empirically that for a wide class of problems, the so-called \"cupcake problems\", there exists a simple strategy, interval reduction, that outperforms periodic monitoring. We also show how features of the environment may influence the choice of the optimal strategy. The paper concludes with some thoughts about a monitoring strategy taxonomy, and what its defining features might be. ",
+ "neighbors": [
+ 91,
+ 328
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 863,
+ "label": 3,
+ "text": "Title: Learning Goal Oriented Bayesian Networks for Telecommunications Risk Management \nAbstract: This paper discusses issues related to Bayesian network model learning for unbalanced binary classification tasks. In general, the primary focus of current research on Bayesian network learning systems (e.g., K2 and its variants) is on the creation of the Bayesian network structure that fits the database best. It turns out that when applied with a specific purpose in mind, such as classification, the performance of these network models may be very poor. We demonstrate that Bayesian network models should be created to meet the specific goal or purpose intended for the model. We first present a goal-oriented algorithm for constructing Bayesian networks for predicting uncollectibles in telecommunications risk-management datasets. Second, we argue and demonstrate that current Bayesian network learning methods may fail to perform satisfactorily in real life applications since they do not learn models tailored to a specific goal or purpose. Third, we discuss the performance of goal oriented K2 and its variant.",
+ "neighbors": [
+ 618,
+ 884,
+ 1032
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 864,
+ "label": 3,
+ "text": "Title: Explaining Predictions in Bayesian Networks and Influence Diagrams \nAbstract: As Bayesian Networks and Influence Diagrams are being used more and more widely, the importance of an efficient explanation mechanism becomes more apparent. We focus on predictive explanations, the ones designed to explain predictions and recommendations of probabilistic systems. We analyze the issues involved in defining, computing and evaluating such explanations and present an algorithm to compute them. ",
+ "neighbors": [
+ 195,
+ 895
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 865,
+ "label": 0,
+ "text": "Title: Learning to Predict User Operations for Adaptive Scheduling \nAbstract: Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling. ",
+ "neighbors": [
+ 45,
+ 866
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 866,
+ "label": 0,
+ "text": "Title: CABINS A Framework of Knowledge Acquisition and Iterative Revision for Schedule Improvement and Reactive Repair \nAbstract: Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling. ",
+ "neighbors": [
+ 550,
+ 865
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 867,
+ "label": 3,
+ "text": "Title: Bayesian and Information-Theoretic Priors for Bayesian Network Parameters \nAbstract: We consider Bayesian and information-theoretic approaches for determining non-informative prior distributions in a parametric model family. The information-theoretic approaches are based on the recently modified definition of stochastic complexity by Rissanen, and on the Minimum Message Length (MML) approach by Wallace. The Bayesian alternatives include the uniform prior, and the equivalent sample size priors. In order to be able to empirically compare the different approaches in practice, the methods are instantiated for a model family of practical importance, the family of Bayesian networks.",
+ "neighbors": [
+ 321
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 868,
+ "label": 4,
+ "text": "Title: Using Communication to Reduce Locality in Distributed Multi-Agent Learning \nAbstract: This paper attempts to bridge the fields of machine learning, robotics, and distributed AI. It discusses the use of communication in reducing the undesirable effects of locality in fully distributed multi-agent systems with multiple agents/robots learning in parallel while interacting with each other. Two key problems, hidden state and credit assignment, are addressed by applying local undirected broadcast communication in a dual role: as sensing and as reinforcement. The methodology is demonstrated on two multi-robot learning experiments. The first describes learning a tightly-coupled coordination task with two robots, the second a loosely-coupled task with four robots learning social rules. Communication is used to 1) share sensory data to overcome hidden state and 2) share reinforcement to overcome the credit assignment problem between the agents and bridge the gap between local/individual and global/group payoff. ",
+ "neighbors": [
+ 379,
+ 402,
+ 920,
+ 939
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 869,
+ "label": 4,
+ "text": "Title: DESIGN AND ANALYSIS OF EFFICIENT REINFORCEMENT LEARNING ALGORITHMS \nAbstract: For many types of learners one can compute the statistically \"optimal\" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regres sion are both efficient and accurate.",
+ "neighbors": [
+ 257,
+ 306,
+ 461,
+ 656
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 870,
+ "label": 6,
+ "text": "Title: Characterizing Rational versus Exponential Learning Curves \nAbstract: We consider the standard problem of learning a concept from random examples. Here a learning curve can be defined to be the expected error of a learner's hypotheses as a function of training sample size. Haussler, Littlestone and Warmuth have shown that, in the distribution free setting, the smallest expected error a learner can achieve in the worst case over a concept class C converges rationally to zero error (i.e., fi(1=t) for training sample size t). However, recently Cohn and Tesauro have demonstrated how exponential convergence can often be observed in experimental settings (i.e., average error decreasing as e fi(t) ). By addressing a simple non-uniformity in the original analysis, this paper shows how the dichotomy between rational and exponential worst case learning curves can be recovered in the distribution free theory. These results support the experimental findings of Cohn and Tesauro: for finite concept classes, any consistent learner achieves exponential convergence, even in the worst case; but for continuous concept classes, no learner can exhibit sub-rational convergence for every target concept and domain distribution. A precise boundary between rational and exponential convergence is drawn for simple concept chains. Here we show that somewhere dense chains always force rational convergence in the worst case, but exponential convergence can always be achieved for nowhere dense chains.",
+ "neighbors": [
+ 556
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 871,
+ "label": 2,
+ "text": "Title: Using Sampling and Queries to Extract Rules from Trained Neural Networks \nAbstract: Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M -of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches.",
+ "neighbors": [
+ 203,
+ 367
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 872,
+ "label": 2,
+ "text": "Title: GROWING RADIAL BASIS FUNCTION NETWORKS \nAbstract: This paper presents and evaluates two algorithms for incrementally constructing Radial Basis Function Networks, a class of neural networks which looks more suitable for adtaptive control applications than the more popular backpropagation networks. The first algorithm has been derived by a previous method developed by Fritzke, while the second one has been inspired by the CART algorithm developed by Breiman for generation regression trees. Both algorithms proved to work well on a number of tests and exhibit comparable performances. An evaluation on the standard case study of the Mackey-Glass temporal series is reported. ",
+ "neighbors": [
+ 357,
+ 399,
+ 430,
+ 521,
+ 931
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 873,
+ "label": 1,
+ "text": "Title: Evolving Fuzzy Prototypes for Efficient Data Clustering \nAbstract: number of prototypes used to represent each class, the position of each prototype within its class and the membership function associated with each prototype. This paper proposes a novel, evolutionary approach to data clustering and classification which overcomes many of the limitations of traditional systems. The approach rests on the optimisation of both the number and positions of fuzzy prototypes using a real-valued genetic algorithm (GA). Because the GA acts on all of the classes at once, the system benefits naturally from global information about possible class interactions. In addition, the concept of a receptive field for each prototype is used to replace the classical distance-based membership function by an infinite fuzzy support, multidimensional, Gaussian function centred over the prototype and with unique variance in each dimension, reflecting the tightness of the cluster. Hence, the notion of nearest-neighbour is replaced by that of nearest attracting prototype (NAP). The proposed model is a completely self-optimising, fuzzy system called GA-NAP. Most data clustering algorithms, including the popular K-means algorithm, require a priori knowledge about the problem domain to fix the number and starting positions of the prototypes. Although such knowledge may be assumed for domains whose dimensionality is fairly small or whose underlying structure is relatively intuitive, it is clearly much less accessible in hyper-dimensional settings, where the number of input parameters may be very large. Classical systems also suffer from the fact that they can only define clusters for one class at a time. Hence, no account is made of potential interactions among classes. These drawbacks are further compounded by the fact that the ensuing classification is typically based on a fixed, distance-based membership function for all prototypes. This paper proposes a novel approach to data clustering and classification which overcomes the aforementioned limitations of traditional systems. The model is based on the genetic evolution of fuzzy prototypes. A real-valued genetic algorithm (GA) is used to optimise both the number and positions of prototypes. Because the GA acts on all of the classes at once and measures fitness as classification accuracy, the system naturally profits from global information about class interaction. The concept of a receptive field for each prototype is also presented and used to replace the classical, fixed distance-based function by an infinite fuzzy support membership function. The new membership function is inspired by that used in the hidden layer of RBF networks. It consists of a multidimensional Gaussian function centred over the prototype and with a unique variance in each dimension that reflects the tightness of the cluster. During classification, the notion of nearest-neighbour is replaced by that of nearest attracting prototype (NAP). The proposed model is a completely self-optimising, fuzzy system called GA-NAP. ",
+ "neighbors": [
+ 521
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 874,
+ "label": 6,
+ "text": "Title: Improved Bounds about On-line Learning of Smooth Functions of a Single Variable \nAbstract: We consider the complexity of learning classes of smooth functions formed by bounding different norms of a function's derivative. The learning model is the generalization of the mistake-bound model to continuous-valued functions. Suppose F q is the set of all absolutely continuous functions f from [0; 1] to R such that jjf 0 jj q 1, and opt(F q ; m) is the best possible bound on the worst-case sum of absolute prediction errors over sequences of m trials. We show that for all q 2, opt(F q ; m) = fi(",
+ "neighbors": [
+ 764
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 875,
+ "label": 5,
+ "text": "Title: Estimating Attributes: Analysis and Extensions of RELIEF \nAbstract: In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.",
+ "neighbors": [
+ 118,
+ 242,
+ 509,
+ 574,
+ 576,
+ 577,
+ 612,
+ 665,
+ 828,
+ 934,
+ 936,
+ 953
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 876,
+ "label": 1,
+ "text": "Title: Average-Case Analysis of a Nearest Neighbor Algorithm \nAbstract: Eugenic Evolution for Combinatorial Optimization John William Prior Report AI98-268 May 1998 ",
+ "neighbors": [
+ 91,
+ 197,
+ 682,
+ 683,
+ 1154
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 877,
+ "label": 1,
+ "text": "Title: The Coevolution of Mutation Rates \nAbstract: In order to better understand life, it is helpful to look beyond the envelop of life as we know it. A simple model of coevolution was implemented with the addition of genes for longevity and mutation rate in the individuals. This made it possible for a lineage to evolve to be immortal. It also allowed the evolution of no mutation or extremely high mutation rates. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes. ",
+ "neighbors": [
+ 91,
+ 645
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 878,
+ "label": 4,
+ "text": "Title: Genetics-based Machine Learning and Behaviour Based Robotics: A New Synthesis complexity grows, the learning task\nAbstract: difficult. We face this problem using an architecture based on learning classifier systems and on the description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns to structural properties of animal behavioural organization, as proposed by ethologists. After a",
+ "neighbors": [
+ 91,
+ 372,
+ 529,
+ 824,
+ 1144
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 879,
+ "label": 0,
+ "text": "Title: Probabilistic Instance-Based Learning \nAbstract: Traditional instance-based learning methods base their predictions directly on (training) data that has been stored in the memory. The predictions are based on weighting the contributions of the individual stored instances by a distance function implementing a domain-dependent similarity metrics. This basic approach suffers from three drawbacks: com-putationally expensive prediction when the database grows large, overfitting in the presence of noisy data, and sensitivity to the selection of a proper distance function. We address all these issues by giving a probabilistic interpretation to instance-based learning, where the goal is to approximate predictive distributions of the attributes of interest. In this probabilistic view the instances are not individual data items but probability distributions, and we perform Bayesian inference with a mixture of such prototype distributions. We demonstrate the feasibility of the method empirically for a wide variety of public domain classification data sets.",
+ "neighbors": [
+ 277,
+ 580
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 880,
+ "label": 1,
+ "text": "Title: A Comparative Study of Genetic Search \nAbstract: We present a comparative study of genetic algorithms and their search properties when treated as a combinatorial optimization technique. This is done in the context of the NP-hard problem MAX-SAT, the comparison being relative to the Metropolis process, and by extension, simulated annealing. Our contribution is two-fold. First, we show that for large and difficult MAX-SAT instances, the contribution of cross-over to the search process is marginal. Little is lost if it is dispensed altogether, running mutation and selection as an enlarged Metropolis process. Second, we show that for these problem instances, genetic search consistently performs worse than simulated annealing when subject to similar resource bounds. The correspondence between the two algorithms is made more precise via a decomposition argument, and provides a framework for interpreting our results. ",
+ "neighbors": [
+ 91,
+ 643,
+ 732
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 881,
+ "label": 6,
+ "text": "Title: What do Constructive Learners Really Learn? \nAbstract: In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This may be necessary if the initial representation is inadequate or inappropriate. However, the distinction between constructive and non-constructive methods appears to be highly ambiguous. Several conventional definitions of the process of constructive induction appear to include all conceivable learning processes. In this paper I argue that the process of constructive learning should be identified with that of relational learning (i.e., I suggest that ",
+ "neighbors": [
+ 216,
+ 239,
+ 710,
+ 892
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 882,
+ "label": 5,
+ "text": "Title: SFOIL: Stochastic Approach to Inductive Logic Programming \nAbstract: Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from local optimization problem. Present paper describes a system named SFOIL, that tries to alleviate this problem by using a stochastic search method, based on a generalization of simulated annealing, called Markovian neural network. Various tests were performed on benchmark, and real-world domains. The results show both, advantages and weaknesses of stochastic approach. ",
+ "neighbors": [
+ 509,
+ 576,
+ 604,
+ 665,
+ 907,
+ 921
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 883,
+ "label": 4,
+ "text": "Title: A Study of the Generalization Capabilities of XCS \nAbstract: We analyze the generalization behavior of the XCS classifier system in environments in which only a few generalizations can be done. Experimental results presented in the paper evidence that the generalization mechanism of XCS can prevent it from learning even simple tasks in such environments. We present a new operator, named Specify, which contributes to the solution of this problem. XCS with the Specify operator, named XCSS, is compared to XCS in terms of performance and generalization capabilities in different types of environments. Experimental results show that XCSS can deal with a greater variety of environments and that it is more robust than XCS with respect to population size.",
+ "neighbors": [
+ 843
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 884,
+ "label": 3,
+ "text": "Title: Efficient Learning of Selective Bayesian Network Classifiers \nAbstract: In this paper, we present a computation-ally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic met-rics that are extensions of metrics used extensively in decision tree learning, namely Quin-lan's gain and gain ratio metrics and Man-taras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.",
+ "neighbors": [
+ 369,
+ 618,
+ 863,
+ 1032,
+ 1342
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 885,
+ "label": 2,
+ "text": "Title: Evolutionary Design of Neural Architectures A Preliminary Taxonomy and Guide to Literature \nAbstract: In this paper, we present a computation-ally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic met-rics that are extensions of metrics used extensively in decision tree learning, namely Quin-lan's gain and gain ratio metrics and Man-taras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.",
+ "neighbors": [
+ 522,
+ 1236,
+ 1295
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 886,
+ "label": 0,
+ "text": "Title: Towards a Theory of Optimal Similarity Measures way of learning a similarity measure from the\nAbstract: The effectiveness of a case-based reasoning system is known to depend critically on its similarity measure. However, it is not clear whether there are elusive and esoteric similarity measures which might improve the performance of a case-based reasoner if substituted for the more commonly used measures. This paper therefore deals with the problem of choosing the best similarity measure, in the limited context of instance-based learning of classifications of a discrete example space. We consider both `fixed' similarity measures and `learnt' ones. In the former case, we give a definition of a similarity measure which we believe to be `optimal' w.r.t. the current prior distribution of target concepts and prove its optimality within a restricted class of similarity measures. We then show how this `optimal' similarity measure is instantiated by some specific prior distributions, and conclude that a very simple similarity measure is as good as any other in these cases. In a further section, we then show how our definition leads naturally to a conjecture about the ",
+ "neighbors": [
+ 743,
+ 1129
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 887,
+ "label": 1,
+ "text": "Title: Automatic Modularization by Speciation \nAbstract: Real-world problems are often too difficult to be solved by a single monolithic system. There are many examples of natural and artificial systems which show that a modular approach can reduce the total complexity of the system while solving a difficult problem satisfactorily. The success of modular artificial neural networks in speech and image processing is a typical example. However, designing a modular system is a difficult task. It relies heavily on human experts and prior knowledge about the problem. There is no systematic and automatic way to form a modular system for a problem. This paper proposes a novel evolutionary learning approach to designing a modular system automatically, without human intervention. Our starting point is speciation, using a technique based on fitness sharing. While speciation in genetic algorithms is not new, no effort has been made towards using a speciated population as a complete modular system. We harness the specialized expertise in the species of an entire population, rather than a single individual, by introducing a gating algorithm. We demonstrate our approach to automatic modularization by improving co-evolutionary game learning. Following earlier researchers, we learn to play iterated prisoner's dilemma. We review some problems of earlier co-evolutionary learning, and explain their poor generalization ability and sudden mass extinctions. The generalization ability of our approach is significantly better than past efforts. Using the specialized expertise of the entire speciated population though a gating algorithm, instead of the best individual, is the main contributor to this improvement. ",
+ "neighbors": [
+ 632,
+ 634,
+ 1206
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 888,
+ "label": 4,
+ "text": "Title: Learning to Sense Selectively in Physical Domains \nAbstract: In this paper we describe an approach to representing, using, and improving sensory skills for physical domains. We present Icarus, an architecture that represents control knowledge in terms of durative states and sequences of such states. The system operates in cycles, activating a state that matches the environmental situation and letting that state control behavior until its conditions fail or until finding another matching state with higher priority. Information about the probability that conditions will remain satisfied minimizes demands on sensing, as does knowledge about the durations of states and their likely successors. Three statistical learning methods let the system gradually reduce sensory load as it gains experience in a domain. We report experimental evaluations of this ability on three simulated physical tasks: flying an aircraft, steering a truck, and balancing a pole. Our experiments include lesion studies that identify the reduction in sensing due to each of the learning mechanisms and others that examine the effect of domain characteristics. ",
+ "neighbors": [
+ 529
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 889,
+ "label": 2,
+ "text": "Title: Unsupervised Learning by Convex and Conic Coding \nAbstract: Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve feedback connections that project a reconstruction back to the input layer.",
+ "neighbors": [
+ 17,
+ 19,
+ 944
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 890,
+ "label": 2,
+ "text": "Title: A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction \nAbstract: Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. dolce consists of a standard recurrent neural net trained by gradient descent and an adaptive clustering technique that quantizes the state space. dolce is based on the assumption that a finite set of discrete internal states is required for the task, and that the actual network state belongs to this set but has been corrupted by noise due to inaccuracy in the weights. dolce learns to recover the discrete state with maximum a posteriori probability from the noisy state. Simulations show that dolce leads to a significant improvement in generalization performance over earlier neural net approaches to FSM induction.",
+ "neighbors": [
+ 230,
+ 656,
+ 662,
+ 726,
+ 728,
+ 958
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 891,
+ "label": 3,
+ "text": "Title: Boltzmann Chains and Hidden Markov Models \nAbstract: We propose a statistical mechanical framework for the modeling of discrete time series. Maximum likelihood estimation is done via Boltzmann learning in one-dimensional networks with tied weights. We call these networks Boltzmann chains and show that they contain hidden Markov models (HMMs) as a special case. Our framework also motivates new architectures that address particular shortcomings of HMMs. We look at two such architectures: parallel chains that model feature sets with disparate time scales, and looped networks that model long-term dependencies between hidden states. For these networks, we show how to implement the Boltzmann learning rule exactly, in polynomial time, without resort to simulated or mean-field annealing. The necessary computations are done by exact decimation procedures from statistical mechanics.",
+ "neighbors": [
+ 633,
+ 723,
+ 799,
+ 813
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 892,
+ "label": 6,
+ "text": "Title: ID2-of-3: Constructive Induction of M of-N Concepts for Discriminators in Decision Trees \nAbstract: We discuss an approach to constructing composite features during the induction of decision trees. The composite features correspond to m-of-n concepts. There are three goals of this research. First, we explore a family of greedy methods for building m-of-n concepts (one of which, GS, is described in this paper). Second, we show how these concepts can be formed as internal nodes of decision trees, serving as a bias to the learner. Finally, we evaluate the method on several artificially generated and naturally occurring data sets to determine the effects of this bias.",
+ "neighbors": [
+ 81,
+ 485,
+ 881,
+ 925,
+ 974,
+ 1008,
+ 1009,
+ 1057,
+ 1340
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 893,
+ "label": 0,
+ "text": "Title: an Opportunistic Enterprise \nAbstract: Tech Report GIT-COGSCI-97/04 Abstract This paper identifies goal handling processes that begin to account for the kind of processes involved in invention. We identify new kinds of goals with special properties and mechanisms for processing such goals, as well as means of integrating opportunism, deliberation, and social interaction into goal/plan processes. We focus on invention goals, which address significant enterprises associated with an inventor. Invention goals represent seed goals of an expert, around which the whole knowledge of an expert gets reorganized and grows more or less opportunistically. Invention goals reflect the idiosyncrasy of thematic goals among experts. They constantly increase the sensitivity of individuals for particular events that might contribute to their satisfaction. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We propose mechanisms to explain: (1) how Bell's early thematic goals gave rise to the new goals to invent the multiple telegraph and the telephone, and (2) how the new goals interacted opportunistically. Finally, we describe our computational model, ALEC, that accounts for the role of goals in invention. ",
+ "neighbors": [
+ 278,
+ 644,
+ 649,
+ 854
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 894,
+ "label": 2,
+ "text": "Title: Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs. Connectionist Models \nAbstract: Learning the past tense of English verbs | a seemingly minor aspect of language acquisition | has generated heated debates since 1986, and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural networks (ANNs) have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons on the generalization ability between ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights as to why this should be the case. We also discuss a new default strategy for decision-tree learning algorithms. ",
+ "neighbors": [
+ 124,
+ 653,
+ 796,
+ 917,
+ 1245
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 895,
+ "label": 3,
+ "text": "Title: Defining Explanation in Probabilistic Systems \nAbstract: As probabilistic systems gain popularity and are coming into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature one due to G ardenfors and one due to Pearland show that both suffer from significant problems. We propose an approach to defining a notion of better explanation that combines some of the features of both together with more recent work by Pearl and others on causality.",
+ "neighbors": [
+ 195,
+ 557,
+ 742,
+ 864
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 896,
+ "label": 2,
+ "text": "Title: Analytic Comparison of Nonlinear H 1 -Norm Bounding Techniques for Low Order Systems with Saturation \nAbstract: A cooperative coevolutionary approach to learning complex structures is presented which, although preliminary in nature, appears to have a number of advantages over non-coevolutionary approaches. The cooperative coevolutionary approach encourages the parallel evolution of substructures which interact in useful ways to form more complex higher level structures. The architecture is designed to be general enough to permit the inclusion, if appropriate, of a priori knowledge in the form of initial biases towards particular kinds of decompositions. A brief summary of initial results obtained from testing this architecture in several problem domains is presented which shows a significant speedup over more traditional non-coevolutionary approaches. ",
+ "neighbors": [
+ 713,
+ 805
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 897,
+ "label": 6,
+ "text": "Title: Learning from the Environment by Experimentation: The Need for Few and Informative Examples \nAbstract: An intelligent system must be able to adapt and learn to correct and update its model of the environment incrementally and deliberately. In complex environments that have many parameters and where interactions have a cost, sampling the possible range of states to test the results of action executions is not a practical approach. We present a practical approach based on continuous and selective interaction with the environment that pinpoints the type of fault in the domain knowledge that causes any unexpected behavior of the environment, and resorts to experimentation when additional information is needed to correct the system's knowledge. ",
+ "neighbors": [
+ 832
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 898,
+ "label": 2,
+ "text": "Title: Pruning Recurrent Neural Networks for Improved Generalization Performance \nAbstract: Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic which significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that if rules are extracted from networks trained to recognize these strings, that rules extracted after pruning are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state triple parity grammar. Further simulations indicate that this pruning method can gives generalization performance superior to that obtained by training with weight decay.",
+ "neighbors": [
+ 726,
+ 1231
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 899,
+ "label": 6,
+ "text": "Title: Characterizing the generalization performance of model selection strategies \nAbstract: We investigate the structure of model selection problems via the bias/variance decomposition. In particular, we characterize the essential aspects of a model selection task by the bias and variance profiles it generates over the sequence of hypothesis classes. With this view, we develop a new understanding of complexity-penalization methods: First, the penalty terms can be interpreted as postulating a particular profile for the variances as a function of model complexityif the postulated and true profiles do not match, then systematic under-fitting or over-fitting results, depending on whether the penalty terms are too large or too small. Second, we observe that it is generally best to penalize according to the true variances of the task, and therefore no fixed penalization strategy is optimal across all problems. We then use this characterization to introduce the notion of easy versus hard model selection problems. Here we show that if the variance profile grows too rapidly in relation to the biases, then standard model selection techniques become prone to significant errors. This can happen, for example, in regression problems where the independent variables are drawn from wide-tailed distributions. To counter this, we discuss a new model selection strategy that dramatically outperforms standard complexity-penalization and hold-out meth ods on these hard tasks.",
+ "neighbors": [
+ 493,
+ 601,
+ 686,
+ 747
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 900,
+ "label": 3,
+ "text": "Title: Combining estimates in regression and classification \nAbstract: We consider the problem of how to combine a collection of general regression fit vectors in order to obtain a better predictive model. The individual fits may be from subset linear regression, ridge regression, or something more complex like a neural network. We develop a general framework for this problem and examine a recent cross-validation-based proposal called \"stacking\" in this context. Combination methods based on the bootstrap and analytic methods are also derived and compared in a number of examples, including best subsets regression and regression trees. Finally, we apply these ideas to classification problems where the estimated combination weights can yield insight into the structure of the problem.",
+ "neighbors": [
+ 684
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 901,
+ "label": 2,
+ "text": "Title: Using Fourier-Neural Recurrent Networks to Fit Sequential Input/Output Data \nAbstract: This paper suggests the use of Fourier-type activation functions in fully recurrent neural networks. The main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.",
+ "neighbors": [
+ 588
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 902,
+ "label": 1,
+ "text": "Title: Island Model Genetic Algorithms and Linearly Separable Problems \nAbstract: Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model Genetic Algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. On the other hand, linearly separable functions have often been used to test Island Model Genetic Algorithms; it is possible that Island Models are particular well suited to separable problems. We look at how Island Models can track multiple search trajectories using the infinite population models of the simple genetic algorithm. We also introduce a simple model for better understanding when Island Model Genetic Algorithms may have an advantage when processing linearly separable problems.",
+ "neighbors": [
+ 56,
+ 91,
+ 652,
+ 777
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 903,
+ "label": 2,
+ "text": "Title: A Provably Convergent Dynamic Training Method for Multilayer Perceptron Networks \nAbstract: This paper presents a new method for training multilayer perceptron networks called DMP1 (Dynamic Multilayer Perceptron 1). The method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The individual nodes of the network are trained using a gentetic algorithm. The method is capable of handling real-valued inputs and a proof is given concerning its convergence properties of the basic model. Simulation results show that DMP1 performs favorably in comparison with other learning algorithms. ",
+ "neighbors": [
+ 470,
+ 690,
+ 912
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 904,
+ "label": 0,
+ "text": "Title: Knowledge Discovery in International Conflict Databases \nAbstract: Artificial Intelligence is heavily supported by military institutions, while practically no effort goes into the investigation of possible contributions of AI to the avoidance and termination of crises and wars. This paper makes a first step into this direction by investigating the use of machine learning techniques for discovering knowledge in international conflict and conflict management databases. We have applied similarity-based case retrieval to the KOSIMO database of international conflicts. Furthermore, we present results of analyzing the CONFMAN database of successful and unsuccessful conflict management attempts with an inductive decision tree learning algorithm. The latter approach seems to be particularly promising, as conflict management events apparently are more repetitive and thus better suited for machine-aided analysis. ",
+ "neighbors": [
+ 133,
+ 242
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 905,
+ "label": 6,
+ "text": "Title: Selection of Relevant Features and Examples in Machine Learning \nAbstract: In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area. ",
+ "neighbors": [
+ 631,
+ 684,
+ 1211
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 906,
+ "label": 0,
+ "text": "Title: Evaluating the Effectiveness of Derivation Replay in Partial-order vs State-space Planning \nAbstract: Case-based planning involves storing individual instances of problem-solving episodes and using them to tackle new planning problems. This paper is concerned with derivation replay, which is the main component of a form of case-based planning called derivational analogy (DA). Prior to this study, implementations of derivation replay have been based within state-space planning. We are motivated by the acknowledged superiority of partial-order (PO) planners in plan generation. Here we demonstrate that plan-space planning also has an advantage in replay. We will argue that the decoupling of planning (derivation) order and the execution order of plan steps, provided by partial-order planners, enables them to exploit the guidance of previous cases in a more efficient and straightforward fashion. We validate our hypothesis through a focused empirical comparison. ",
+ "neighbors": [
+ 173,
+ 348,
+ 435,
+ 478,
+ 672
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 907,
+ "label": 5,
+ "text": "Title: Stochastic Propositionalization of Non-Determinate Background Knowledge \nAbstract: It is a well-known fact that propositional learning algorithms require \"good\" features to perform well in practice. So a major step in data engineering for inductive learning is the construction of good features by domain experts. These features often represent properties of structured objects, where a property typically is the occurrence of a certain substructure having certain properties. To partly automate the process of \"feature engineering\", we devised an algorithm that searches for features which are defined by such substructures. The algorithm stochastically conducts a top-down search for first-order clauses, where each clause represents a binary feature. It differs from existing algorithms in that its search is not class-blind, and that it is capable of considering clauses (\"context\") of almost arbitrary length (size). Preliminary experiments are favorable, and support the view that this approach is promising.",
+ "neighbors": [
+ 198,
+ 735,
+ 796,
+ 882
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 908,
+ "label": 5,
+ "text": "Title: Inductive Learning of Characteristic Concept Descriptions from Small Sets of Classified Examples \nAbstract: This paper presents a novel idea to the problem of learning concept descriptions from examples. Whereas most existing approaches rely on a large number of classified examples, the approach presented in the paper is aimed at being applicable when only a few examples are classified as positive (and negative) instances of a concept. The approach tries to take advantage of the information which can be induced from descriptions of unclassified objects using a conceptual clustering algorithm. The system Cola is described and results of applying Cola in two real-world domains are presented. ",
+ "neighbors": [
+ 198,
+ 663
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 909,
+ "label": 2,
+ "text": "Title: Regional Stability of an ERS/JERS-1 Classifer \nAbstract: The potential of combined ERS/JERS-1 SAR images for land cover classification was demonstrated for the Raco test site (Michigan) in recent papers and articles. Our goal is to develop a classification algorithm which is stable in terms of applicability in different geographical regions. Unlike optical remote sensing techniques, radar remote sensing can provide calibrated data where the image signal is solely determined by the physical (structural) and electrical properties of the targets on the Earth's surface and near subsurface. Hence, a classifier based on radar signatures of object classes should be applicable on new calibrated images without the need to train the classifier again. This article discusses the design and applicability of a classification algorithm, which is based on calibrated radar signatures measured from ERS-1 (C-band, vv polarized) and JERS-1 (L-band, hh polarized) SAR image data. The applicability is compared in two different test sites, Raco, Michigan and the Cedar Creek LTER site, Minnesota. It was found, that classes separate very well, when certain boundary conditions like comparable seasonality or soil moisture conditions are observed. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 910,
+ "label": 1,
+ "text": "Title: A Survey of Intron Research in Genetics \nAbstract: A brief survey of biological research on non-coding DNA is presented here. There has been growing interest in the effects of non-coding segments in evolutionary algorithms (EAs). To better understand and conduct research on non-coding segments and EAs, it is important to understand the biological background of such work. This paper begins with a review of basic genetics and terminology, describes the different types of non-coding DNA, and then surveys recent intron research.",
+ "neighbors": [
+ 542,
+ 1205,
+ 1311,
+ 1314
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 911,
+ "label": 0,
+ "text": "Title: Context-Sensitive Feature Selection for Lazy Learners \nAbstract: ",
+ "neighbors": [
+ 564,
+ 612,
+ 936
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 912,
+ "label": 2,
+ "text": "Title: The Effect of Decision Surface Fitness on Dynamic Multilayer Perceptron Networks (DMP1) \nAbstract: The DMP1 (Dynamic Multilayer Perceptron 1) network training method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. This paper introduces the DMP1 method, and compares the preformance of DMP1 when using the standard delta rule training method for training individual nodes against the performance of DMP1 when using a genetic algorithm for training. While the basic model does not require the use of a genetic algorithm for training individual nodes, the results show that the convergence properties of DMP1 are enhanced by the use of a genetic algorithm with an appropriate fitness function. ",
+ "neighbors": [
+ 470,
+ 903
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 913,
+ "label": 0,
+ "text": "Title: KRITIK: AN EARLY CASE-BASED DESIGN SYSTEM \nAbstract: In the late 1980s, we developed one of the early case-based design systems called Kritik. Kritik autonomously generated preliminary (conceptual, qualitative) designs for physical devices by retrieving and adapting past designs stored in its case memory. Each case in the system had an associated structure-behavior-function (SBF) device model that explained how the structure of the device accomplished its functions. These casespecific device models guided the process of modifying a past design to meet the functional specification of a new design problem. The device models also enabled verification of the design modifications. Kritik2 is a new and more complete implementation of Kritik. In this paper, we take a retrospective view on Kritik. In early papers, we had described Kritik as integrating case-based and model-based reasoning. In this integration, Kritik also grounds the computational process of case-based reasoning in the SBF content theory of device comprehension. The SBF models not only provide methods for many specific tasks in case-based design such as design adaptation and verification, but they also provide the vocabulary for the whole process of case-based design, from retrieval of old cases to storage of new ones. This grounding, we believe, is essential for building well-constrained theories of case-based design. ",
+ "neighbors": [
+ 310,
+ 352,
+ 755,
+ 762
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 914,
+ "label": 3,
+ "text": "Title: Learning Bayesian Networks from Incomplete Data \nAbstract: Much of the current research in learning Bayesian Networks fails to effectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data. ",
+ "neighbors": [
+ 321,
+ 618
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 915,
+ "label": 0,
+ "text": "Title: CHIRON: Planning in an Open-Textured Domain \nAbstract: Much of the current research in learning Bayesian Networks fails to effectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data. ",
+ "neighbors": [
+ 182,
+ 775
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 916,
+ "label": 4,
+ "text": "Title: Learning to coordinate without sharing information \nAbstract: Researchers in the field of Distributed Artificial Intelligence (DAI) have been developing efficient mechanisms to coordinate the activities of multiple autonomous agents. The need for coordination arises because agents have to share resources and expertise required to achieve their goals. Previous work in the area includes using sophisticated information exchange protocols, investigating heuristics for negotiation, and developing formal models of possibilities of conflict and cooperation among agent interests. In order to handle the changing requirements of continuous and dynamic environments, we propose learning as a means to provide additional possibilities for effective coordination. We use reinforcement learning techniques on a block pushing problem to show that agents can learn complimentary policies to follow a desired path without any knowledge about each other. We theoretically analyze and experimentally verify the effects of learning rate on system convergence, and demonstrate benefits of using learned coordination knowledge on similar problems. Reinforcement learning based coordination can be achieved in both cooperative and non-cooperative domains, and in domains with noisy communication channels and other stochastic characteristics that present a formidable challenge to using other coordination schemes. ",
+ "neighbors": [
+ 328,
+ 378,
+ 449,
+ 920
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 917,
+ "label": 2,
+ "text": "Title: A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping \nAbstract: The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently out-performs ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be approached but not matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially. A study of the residual errors suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping.",
+ "neighbors": [
+ 185,
+ 188,
+ 217,
+ 261,
+ 407,
+ 724,
+ 743,
+ 894,
+ 956,
+ 1008,
+ 1009,
+ 1057,
+ 1245,
+ 1319
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 918,
+ "label": 2,
+ "text": "Title: Acquiring the mapping from meaning to sounds \nAbstract: 1 We thank Steen Ladegaard Knudsen for his assistance in programming, analysis and running of simulations, Scott Baden for his assistance in vectorizing our code for the Cray Y-MP, the Division of Engineering Block Grant for time on the Cray at the San Diego Supercomputer Center, and the members of the PDPNLP and GURU Research Groups at UCSD for helpful comments on earlier versions of this work. ",
+ "neighbors": [
+ 114,
+ 272
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 919,
+ "label": 0,
+ "text": "Title: Creative Design: Reasoning and Understanding \nAbstract: This paper investigates memory issues that influence long- term creative problem solving and design activity, taking a case-based reasoning perspective. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We abstract Bell's reasoning and understanding mechanisms that appear time and again in long-term creative design. We identify that the understanding mechanism is responsible for analogical anticipation of design constraints and analogical evaluation, beside case-based design. But an already understood design can satisfy opportunistically suspended design problems, still active in background. The new mechanisms are integrated in a computational model, ALEC 1 , that accounts for some creative be ",
+ "neighbors": [
+ 762
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 920,
+ "label": 4,
+ "text": "Title: Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents \nAbstract: Intelligent human agents exist in a cooperative social environment that facilitates learning. They learn not only by trial-and-error, but also through cooperation by sharing instantaneous information, episodic experience, and learned knowledge. The key investigations of this paper are, \"Given the same number of reinforcement learning agents, will cooperative agents outperform independent agents who do not communicate during learning?\" and \"What is the price for such cooperation?\" Using independent agents as a benchmark, cooperative agents are studied in following ways: (1) sharing sensation, (2) sharing episodes, and (3) sharing learned policies. This paper shows that (a) additional sensation from another agent is beneficial if it can be used efficiently, (b) sharing learned policies or episodes among agents speeds up learning at the cost of communication, and (c) for joint tasks, agents engaging in partnership can significantly outperform independent agents although they may learn slowly in the beginning. These tradeoffs are not just limited to multi-agent reinforcement learning.",
+ "neighbors": [
+ 80,
+ 402,
+ 689,
+ 868,
+ 916,
+ 939
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 921,
+ "label": 5,
+ "text": "Title: An application of ILP in a musical database: learning to compose the two-voice counterpoint \nAbstract: We describe SFOIL, a descendant of FOIL that uses the advanced stochastic search heuristic, and its application in learning to compose the two-voice counterpoint. The application required learning a 4-ary relation from more than 20.000 training instances. SFOIL is able to efficiently deal with this learning task which is to our knowledge one of the most complex learning task solved by an ILP system. This demonstrates that ILP systems can scale up to real databases and that top-down ILP systems that use the covering approach and advanced search strategies are appropriate for knowledge discovery in databases and are promising for further investigation. ",
+ "neighbors": [
+ 509,
+ 576,
+ 604,
+ 882
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 922,
+ "label": 0,
+ "text": "Title: Problem Solving for Redesign \nAbstract: A knowledge-level analysis of complex tasks like diagnosis and design can give us a better understanding of these tasks in terms of the goals they aim to achieve and the different ways to achieve these goals. In this paper we present a knowledge-level analysis of redesign. Redesign is viewed as a family of methods based on some common principles, and a number of dimensions along which redesign problem solving methods can vary are distinguished. By examining the problem-solving behavior of a number of existing redesign systems and approaches, we came up with a collection of problem-solving methods for redesign and developed a task-method structure for redesign. In constructing a system for redesign a large number of knowledge-related choices and decisions are made. In order to describe all relevant choices in redesign problem solving, we have to extend the current notion of possible relations between tasks and methods in a PSM architecture. The realization of a task by a problem-solving method, and the decomposition of a problem-solving method into subtasks are the most common relations in a PSM architecture. However, we suggest to extend these relations with the notions of task refinement and method refinement. These notions represent intermediate decisions in a task-method structure, in which the competence of a task or method is refined without immediately paying attention to its operationalization in terms of subtasks. Explicit representation of this kind of intermediate decisions helps to make and represent decisions in a more piecemeal fashion. ",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 923,
+ "label": 3,
+ "text": "Title: Hyperparameter estimation in Dirichlet process mixture models \nAbstract: In Bayesian density estimation and prediction using Dirichlet process mixtures of standard, exponential family distributions, the precision or total mass parameter of the mixing Dirichlet process is a critical hyperparame-ter that strongly influences resulting inferences about numbers of mixture components. This note shows how, with respect to a flexible class of prior distributions for this parameter, the posterior may be represented in a simple conditional form that is easily simulated. As a result, inference about this key quantity may be developed in tandem with the existing, routine Gibbs sampling algorithms for fitting such mixture models. The concept of data augmentation is important, as ever, in developing this extension of the existing algorithm. A final section notes an simple asymptotic approx imation to the posterior. ",
+ "neighbors": [
+ 495,
+ 498,
+ 750
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 924,
+ "label": 2,
+ "text": "Title: Unsupervised Neural Network Learning Procedures For Feature Extraction and Classification \nAbstract: Technical report CNS-TR-95-1 Center for Neural Systems McMaster University ",
+ "neighbors": [
+ 303,
+ 422,
+ 1217
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 925,
+ "label": 2,
+ "text": "Title: Using Neural Networks to Automatically Refine Expert System Knowledge Bases: Experiments in the NYNEX MAX Domain \nAbstract: In this paper we describe our study of applying knowledge-based neural networks to the problem of diagnosing faults in local telephone loops. Currently, NYNEX uses an expert system called MAX to aid human experts in diagnosing these faults; however, having an effective learning algorithm in place of MAX would allow easy portability between different maintenance centers, and easy updating when the phone equipment changes. We find that (i) machine learning algorithms have better accuracy than MAX, (ii) neural networks perform better than decision trees, (iii) neural network ensembles perform better than standard neural networks, (iv) knowledge-based neural networks perform better than standard neural networks, and (v) an ensemble of knowledge-based neural networks performs the best. ",
+ "neighbors": [
+ 792,
+ 814,
+ 892
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 926,
+ "label": 2,
+ "text": "Title: In Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections \nAbstract: Human vision systems integrate information nonlocally, across long spatial ranges. For example, a moving stimulus appears smeared when viewed briefly (30 ms), yet sharp when viewed for a longer exposure (100 ms) (Burr, 1980). This suggests that visual systems combine information along a trajectory that matches the motion of the stimulus. Our self-organizing neural network model shows how developmental exposure to moving stimuli can direct the formation of horizontal trajectory-specific motion integration pathways that unsmear representations of moving stimuli. These results account for Burr's data and can potentially also model other phenomena, such as visual inertia.",
+ "neighbors": [
+ 620
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 927,
+ "label": 6,
+ "text": "Title: Simulating Access to Hidden Information while Learning \nAbstract: We introduce a new technique which enables a learner without access to hidden information to learn nearly as well as a learner with access to hidden information. We apply our technique to solve an open problem of Maass and Turan [18], showing that for any concept class F , the least number of queries sufficient for learning F by an algorithm which has access only to arbitrary equivalence queries is at most a factor of 1= log 2 (4=3) more than the least number of queries sufficient for learning F by an algorithm which has access to both arbitrary equivalence queries and membership queries. Previously known results imply that the 1= log 2 (4=3) in our bound is best possible. We describe analogous results for two generalizations of this model to function learning, and apply those results to bound the difficulty of learning in the harder of these models in terms of the difficulty of learning in the easier model. We bound the difficulty of learning unions of k concepts from a class F in terms of the difficulty of learning F . We bound the difficulty of learning in a noisy environment for deterministic algorithms in terms of the difficulty of learning in a noise-free environment. We apply a variant of our technique to develop an algorithm transformation that allows probabilistic learning algorithms to nearly optimally cope with noise. A second variant enables us to improve a general lower bound of Turan [19] for the PAC-learning model (with queries). Finally, we show that logarithmically many membership queries never help to obtain computationally efficient learning algorithms. fl Supported by Air Force Office of Scientific Research grant F49620-92-J-0515. Most of this work was done while this author was at TU Graz supported by a Lise Meitner Fellowship from the Fonds zur Forderung der wissenschaftlichen Forschung (Austria). ",
+ "neighbors": [
+ 62,
+ 255,
+ 461,
+ 764
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 928,
+ "label": 0,
+ "text": "Title: Conceptual Analogy: Conceptual clustering for informed and efficient analogical reasoning \nAbstract: Conceptual analogy (CA) is a general approach that applies conceptual clustering and concept representations to facilitate the efficient use of past experiences (cases) during analogical reasoning (Borner 1995). The approach was developed and implemented in SYN* (see also (Borner 1994, Borner and Faauer 1995)) to support the design of supply nets in building engineering. This paper sketches the task; it outlines the nearest-neighbor-based agglomerative conceptual clustering applied in organizing large amounts of structured cases into case classes; it provides the concept representation used to characterize case classes and shows the analogous solution of new problems based on the concepts available. However, the main purpose of this paper is to evaluate CA in terms of its reasoning efficiency; its capability to derive solutions that go beyond the cases in the case base but still preserve the quality of cases.",
+ "neighbors": [
+ 309,
+ 512
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 929,
+ "label": 2,
+ "text": "Title: Space-Frequency Localized Basis Function Networks for Nonlinear System Estimation and Control \nAbstract: Stable neural network control and estimation may be viewed formally as a merging of concepts from nonlinear dynamic systems theory with tools from multivariate approximation theory. This paper extends earlier results on adaptive control and estimation of nonlinear systems using gaussian radial basis functions to the on-line generation of irregularly sampled networks, using tools from multiresolution analysis and wavelet theory. This yields much more compact and efficient system representations while preserving global closed-loop stability. Approximation models employing basis functions that are localized in both space and spatial frequency admit a measure of the approximated function's spatial frequency content that is not directly dependent on reconstruction error. As a result, these models afford a means of adaptively selecting basis functions according to the local spatial frequency content of the approximated function. An algorithm for stable, on-line adaptation of output weights simultaneously with node configuration in a class of non-parametric models with wavelet basis functions is presented. An asymptotic bound on the error in the network's reconstruction is derived and shown to be dependent solely on the minimum approximation error associated with the steady state node configuration. In addition, prior bounds on the temporal bandwidth of the system to be identified or controlled are used to develop a criterion for on-line selection of radial and ridge wavelet basis functions, thus reducing the rate of increase in network's size with the dimension of the state vector. Experimental results obtained by using the network to predict the path of an unknown light bluff object thrown through air, in an active-vision based robotic catching system, are given to illustrate the network's performance in a simple real-time application. ",
+ "neighbors": [
+ 357,
+ 830,
+ 1033,
+ 1229
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 930,
+ "label": 6,
+ "text": "Title: From: Computational Learning Theory and Natural Systems, Chapter 18, \"Cross-validation and Modal Theories\", Cross-Validation and\nAbstract: Cross-validation is a frequently used, intuitively pleasing technique for estimating the accuracy of theories learned by machine learning algorithms. During testing of a machine learning algorithm (foil) on new databases of prokaryotic RNA transcription promoters which we have developed, cross-validation displayed an interesting phenomenon. One theory is found repeatedly and is responsible for very little of the cross-validation error, whereas other theories are found very infrequently which tend to be responsible for the majority of the cross-validation error. It is tempting to believe that the most frequently found theory (the \"modal theory\") may be more accurate as a classifier of unseen data than the other theories. However, experiments showed that modal theories are not more accurate on unseen data than the other theories found less frequently during cross-validation. Modal theories may be useful in predicting when cross-validation is a poor estimate of true accuracy. We offer explanations 1 For correspondence: Department of Computer Science and Engineering, University of California, San ",
+ "neighbors": [
+ 198
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 931,
+ "label": 2,
+ "text": "Title: Learning Controllers for Industrial Robots \nAbstract: One of the most significant cost factors in robotics applications is the design and development of real-time robot control software. Control theory helps when linear controllers have to be developed, but it doesn't sufficiently support the generation of non-linear controllers, although in many cases (such as in compliance control), nonlinear control is essential for achieving high performance. This paper discusses how Machine Learning has been applied to the design of (non-)linear controllers. Several alternative function approximators, including Multilayer Perceptrons (MLP), Radial Basis Function Networks (RBFNs), and Fuzzy Controllers are analyzed and compared, leading to the definition of two major families: Open Field Function Function Approximators and Locally Receptive Field Function Approximators. It is shown that RBFNs and Fuzzy Controllers bear strong similarities, and that both have a symbolic interpretation. This characteristics allows for applying both symbolic and statistic learning algorithms to synthesize the network layout from a set of examples and, possibly, some background knowledge. Three integrated learning algorithms, two of which are original, are described and evaluated on experimental test cases. The first test case is provided by a robot KUKA IR-361 engaged into the \"peg-into-hole\" task, whereas the second is represented by a classical prediction task on the Mackey-Glass time series. From the experimental comparison, it appears that both Fuzzy Controllers and RBFNs synthesised from examples are excellent approximators, and that, in practice, they can be even more accurate than MLPs. ",
+ "neighbors": [
+ 169,
+ 404,
+ 521,
+ 524,
+ 759,
+ 872
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 932,
+ "label": 0,
+ "text": "Title: Proceedings of CogSci89 Structural Evaluation of Analogies: What Counts? \nAbstract: Judgments of similarity and soundness are important aspects of human analogical processing. This paper explores how these judgments can be modeled using SME, a simulation of Gentner's structure-mapping theory. We focus on structural evaluation, explicating several principles which psychologically plausible algorithms should follow. We introduce the Specificity Conjecture, which claims that naturalistic representations include a preponderance of appearance and low-order information. We demonstrate via computational experiments that this conjecture affects how structural evaluation should be performed, including the choice of normalization technique and how the systematicity preference is implemented. ",
+ "neighbors": [
+ 637,
+ 761,
+ 935
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 933,
+ "label": 6,
+ "text": "Title: Induction of One-Level Decision Trees \nAbstract: In recent years, researchers have made considerable progress on the worst-case analysis of inductive learning tasks, but for theoretical results to have impact on practice, they must deal with the average case. In this paper we present an average-case analysis of a simple algorithm that induces one-level decision trees for concepts defined by a single relevant attribute. Given knowledge about the number of training instances, the number of irrelevant attributes, the amount of class and attribute noise, and the class and attribute distributions, we derive the expected classification accuracy over the entire instance space. We then examine the predictions of this analysis for different settings of these domain parameters, comparing them to exper imental results to check our reasoning. ",
+ "neighbors": [
+ 217,
+ 751
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 934,
+ "label": 5,
+ "text": "Title: Machine learning in prognosis of the femoral neck fracture recovery examples, estimating attributes, explanation ability,\nAbstract: We compare the performance of several machine learning algorithms in the problem of prognos-tics of the femoral neck fracture recovery: the K-nearest neighbours algorithm, the semi-naive Bayesian classifier, backpropagation with weight elimination learning of the multilayered neural networks, the LFC (lookahead feature construction) algorithm, and the Assistant-I and Assistant-R algorithms for top down induction of decision trees using information gain and RELIEFF as search heuristics, respectively. We compare the prognostic accuracy and the explanation ability of different classifiers. Among the different algorithms the semi-naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving prediction problems and show that the combined classifier improves both performance and the explanation ability. ",
+ "neighbors": [
+ 367,
+ 875,
+ 953
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 935,
+ "label": 0,
+ "text": "Title: Making SME greedy and pragmatic \nAbstract: The Structure-Mapping Engine (SME) has successfully modeled several aspects of human consistent interpretations of an analogy. While useful for theoretical explorations, this aspect of the algorithm is both psychologically implausible and computationally inefficient. (2) SME contains no mechanism for focusing on interpretations relevant to an analogizer's goals. This paper describes modifications to SME which overcome these flaws. We describe a greedy merge algorithm which efficiently computes an approximate \"best\" interpretation, and can generate alternate interpretations when necessary. We describe pragmatic marking, a technique which focuses the mapping to produce relevant, yet novel, inferences. We illustrate these techniques via example and evaluate their performance using empirical data and theoretical analysis. analogical processing. However, it has two significant drawbacks: (1) SME constructs all structurally",
+ "neighbors": [
+ 637,
+ 761,
+ 932
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 936,
+ "label": 5,
+ "text": "Title: Context-sensitive attribute estimation in regression \nAbstract: One of key issues in both discrete and continuous class prediction and in machine learning in general seems to be the problem of estimating the quality of attributes. Heuristic measures mostly assume independence of attributes so their use is non-optimal in domains with strong dependencies between attributes. For the same reason they are also mostly unable to recognize context dependent features. Relief and its extension Re-liefF are statistical methods capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view and recognize contextual attributes. After the analysis of ReliefF we have extended it to continuous class problems. Regressional ReliefF (RReliefF) and ReliefF provide a unified view on estimating attribute quality. The experiments show that RReliefF correctly estimates the quality of attributes, recognizes the contextual attributes and can be used for non myopic learning of the regression trees.",
+ "neighbors": [
+ 183,
+ 665,
+ 875,
+ 911,
+ 953
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 937,
+ "label": 1,
+ "text": "Title: Optimization by Means of Genetic Algorithms \nAbstract: Genetic Algorithms (GAs) are powerful heuristic search strategies based upon a simple model of organic evolution. The basic working scheme of GAs as developed by Holland [Hol75] is described within this paper in a formal way, and extensions based upon the second-level learning principle for strategy parameters as introduced in Evolution Strategies (ESs) are proposed. First experimental results concerning this extension of GAs are also reported.",
+ "neighbors": [
+ 91,
+ 237,
+ 610,
+ 807
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 938,
+ "label": 5,
+ "text": "Title: Learning with Abduction \nAbstract: We investigate how abduction and induction can be integrated into a common learning framework through the notion of Abductive Concept Learning (ACL). ACL is an extension of Inductive Logic Programming (ILP) to the case in which both the background and the target theory are abductive logic programs and where an abductive notion of entailment is used as the coverage relation. In this framework, it is then possible to learn with incomplete information about the examples by exploiting the hypothetical reasoning of abduction. The paper presents the basic framework of ACL with its main characteristics and illustrates its potential in addressing several problems in ILP such as learning with incomplete information and multiple predicate learning. An algorithm for ACL is developed by suitably extending the top-down ILP method for concept learning and integrating this with an abductive proof procedure for Abductive Logic Programming (ALP). A prototype system has been developed and applied to learning problems with incomplete information. The particular role of integrity constraints in ACL is investigated showing ACL as a hybrid learning framework that integrates the explanatory (discriminant) and descriptive (characteristic) settings of ILP.",
+ "neighbors": [
+ 486,
+ 1247
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 939,
+ "label": 4,
+ "text": "Title: Markov games as a framework for multi-agent reinforcement learning \nAbstract: In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.",
+ "neighbors": [
+ 80,
+ 300,
+ 449,
+ 689,
+ 811,
+ 868,
+ 920
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 940,
+ "label": 1,
+ "text": "Title: Selection for Wandering Behavior in a Small Robot \nAbstract: We have evolved artificial neural networks to control the wandering behavior of small robots. The task was to touch as many squares in a grid as possible during a fixed period of time. A number of the simulated robots were embodied in small Lego (Trademark) robot, controlled by a Motorola (Trademark) 6811 processor; and their performance was compared to the simulations. We observed that: (a) evolution was an effective means to program control; (b) progress was characterized by sharply stepped periods of improvement, separated by periods of stasis that corresponded to levels of behavioral/computational complexity; and (c) the simulated and realized robots behaved quite similarly, the realized robots in some cases outperforming the simulated ones. Introducing random noise to the simulations improved the fit somewhat (from 0.73 to 0.79). Hybrid simulated/embodied selection regimes for evolutionary robots are discussed. ",
+ "neighbors": [
+ 20,
+ 91,
+ 123,
+ 308,
+ 961
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 941,
+ "label": 1,
+ "text": "Title: The Royal Road for Genetic Algorithms: Fitness Landscapes and GA Performance \nAbstract: Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class (\"Royal Road\" functions), and present some initial experimental results concerning the role of crossover and \"building blocks\" on landscapes constructed from features of this class.",
+ "neighbors": [
+ 91,
+ 632,
+ 746,
+ 1014,
+ 1060,
+ 1145,
+ 1174,
+ 1205
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 942,
+ "label": 0,
+ "text": "Title: On the Usefulness of Re-using Diagnostic Solutions \nAbstract: Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. ",
+ "neighbors": [
+ 463,
+ 681
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 943,
+ "label": 2,
+ "text": "Title: Growing a Hypercubical Output Space in a Self-Organizing Feature Map \nAbstract: Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. ",
+ "neighbors": [
+ 399,
+ 430
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 944,
+ "label": 2,
+ "text": "Title: Pattern analysis and synthesis in attractor neural networks \nAbstract: The representation of hidden variable models by attractor neural networks is studied. Memories are stored in a dynamical attractor that is a continuous manifold of fixed points, as illustrated by linear and nonlinear networks with hidden neurons. Pattern analysis and synthesis are forms of pattern completion by recall of a stored memory. Analysis and synthesis in the linear network are performed by bottom-up and top-down connections. In the nonlinear network, the analysis computation additionally requires rectification nonlinearity and inner product inhibition between hidden neurons. One popular approach to sensory processing is based on generative models, which assume that sensory input patterns are synthesized from some underlying hidden variables. For example, the sounds of speech can be synthesized from a sequence of phonemes, and images of a face can be synthesized from pose and lighting variables. Hidden variables are useful because they constitute a simpler representation of the variables that are visible in the sensory input. Using a generative model for sensory processing requires a method of pattern analysis. Given a sensory input pattern, analysis is the recovery of the hidden variables from which it was synthesized. In other words, analysis and synthesis are inverses of each other. There are a number of approaches to pattern analysis. In analysis-by-synthesis, the synthetic model is embedded inside a negative feedback loop[1]. Another approach is to construct a separate analysis model[2]. This paper explores a third approach, in which visible-hidden pairs are embedded as attractive fixed points, or attractors, in the state space of a recurrent neural network. The attractors can be regarded as memories stored in the network, and analysis and synthesis as forms of pattern completion by recall of a memory. The approach is illustrated with linear and nonlinear network architectures. In both networks, the synthetic model is linear, as in principal ",
+ "neighbors": [
+ 17,
+ 19,
+ 889
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 945,
+ "label": 6,
+ "text": "Title: A Simpler Look at Consistency \nAbstract: One of the major goals of most early concept learners was to find hypotheses that were perfectly consistent with the training data. It was believed that this goal would indirectly achieve a high degree of predictive accuracy on a set of test data. Later research has partially disproved this belief. However, the issue of consistency has not yet been resolved completely. We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. ",
+ "neighbors": [
+ 745
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 946,
+ "label": 6,
+ "text": "Title: An Efficient Extension to Mixture Techniques for Prediction and Decision Trees \nAbstract: We present a method for maintaining mixtures of prunings of a prediction or decision tree that extends the \"node-based\" prunings of [Bun90, WST95, HS97] to the larger class of edge-based prunings. The method includes an efficient online weight allocation algorithm that can be used for prediction, compression and classification. Although the set of edge-based prunings of a given tree is much larger than that of node-based prunings, our algorithm has similar space and time complexity to that of previous mixture algorithms for trees. Using the general on-line framework of Freund and Schapire [FS97], we prove that our algorithm maintains correctly the mixture weights for edge-based prunings with any bounded loss function. We also give a similar algorithm for the logarithmic loss function with a corresponding weight allocation algorithm. Finally, we describe experiments comparing node-based and edge-based mixture models for estimating the probability of the next word in English text, which show the ad vantages of edge-based models.",
+ "neighbors": [
+ 255,
+ 330,
+ 586,
+ 724
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 947,
+ "label": 3,
+ "text": "Title: A simulation approach to convergence rates for Markov chain Monte Carlo algorithms \nAbstract: Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis-Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. Rosenthal (1995b) presents a method to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in Rosenthal's theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it can not provide the guarantees offered by analytical proof. Acknowledgements. We thank Brad Carlin for assistance and encouragement. ",
+ "neighbors": [
+ 21,
+ 74,
+ 235,
+ 266,
+ 517,
+ 518,
+ 949,
+ 1066,
+ 1130,
+ 1282
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 948,
+ "label": 1,
+ "text": "Title: A Sampling-Based Heuristic for Tree Search Applied to Grammar Induction \nAbstract: In the field of Operation Research and Artificial Intelligence, several stochastic search algorithms have been designed based on the theory of global random search (Zhigljavsky 1991). Basically, those techniques iteratively sample the search space with respect to a probability distribution which is updated according to the result of previous samples and some predefined strategy. Genetic Algorithms (GAs) (Goldberg 1989) or Greedy Randomized Adaptive Search Procedures (GRASP) (Feo & Resende 1995) are two particular instances of this paradigm. In this paper, we present SAGE, a search algorithm based on the same fundamental mechanisms as those techniques. However, it addresses a class of problems for which it is difficult to design transformation operators to perform local search because of intrinsic constraints in the definition of the problem itself. For those problems, a procedural approach is the natural way to construct solutions, resulting in a state space represented as a tree or a DAG. The aim of this paper is to describe the underlying heuristics used by SAGE to address problems belonging to that class. The performance of SAGE is analyzed on the problem of grammar induction and its successful application to problems from the recent Abbadingo DFA learning competition is presented. ",
+ "neighbors": [
+ 91,
+ 462,
+ 780,
+ 958
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 949,
+ "label": 3,
+ "text": "Title: Analysis of the Gibbs sampler for a model related to James-Stein estimators \nAbstract: Summary. We analyze a hierarchical Bayes model which is related to the usual empirical Bayes formulation of James-Stein estimators. We consider running a Gibbs sampler on this model. Using previous results about convergence rates of Markov chains, we provide rigorous, numerical, reasonable bounds on the running time of the Gibbs sampler, for a suitable range of prior distributions. We apply these results to baseball data from Efron and Morris (1975). For a different range of prior distributions, we prove that the Gibbs sampler will fail to converge, and use this information to prove that in this case the associated posterior distribution is non-normalizable. Acknowledgements. I am very grateful to Jun Liu for suggesting this project, and to Neal Madras for suggesting the use of the Submartingale Convergence Theorem herein. I thank Kate Cowles and Richard Tweedie for helpful conversations, and thank the referees for useful comments. ",
+ "neighbors": [
+ 21,
+ 74,
+ 518,
+ 947,
+ 1066,
+ 1130
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 950,
+ "label": 1,
+ "text": "Title: 3 Representation Issues in Neighborhood Search and Evolutionary Algorithms \nAbstract: Evolutionary Algorithms are often presented as general purpose search methods. Yet, we also know that no search method is better than another over all possible problems and that in fact there is often a good deal of problem specific information involved in the choice of problem representation and search operators. In this paper we explore some very general properties of representations as they relate to neighborhood search methods. In particular, we looked at the expected number of local optima under a neighborhood search operator when averaged overall possible representations. The number of local optima under a neighborhood search operator for standard Binary and standard binary reflected Gray codes is developed and explored as one measure of problem complexity. We also relate number of local optima to another metric, , designed to provide one measure of complexity with respect to a simple genetic algorithm. Choosing a good representation is a vital component of solving any search problem. However, choosing a good representation for a problem is as difficult as choosing a good search algorithm for a problem. Wolpert and Macready's (1995) No Free Lunch (NFL) theorem proves that no search algorithm is better than any other over all possible discrete functions. Radcliffe and Surry (1995) extend these notions to also cover the idea that all representations are equivalent when their behavior is considered on average over all possible functions. To understand these results, we first outline some of the simple assumptions behind this theorem. First, assume the optimization problem is discrete; this describes all combinatorial optimization problems-and really all optimization problems being solved on computers since computers have finite precision. Second, we ignore the fact that we can resample points in the space. The \"No Free Lunch\" result can be stated as follows: ",
+ "neighbors": [
+ 91,
+ 777
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 951,
+ "label": 2,
+ "text": "Title: PREDICTING SUNSPOTS AND EXCHANGE RATES WITH CONNECTIONIST NETWORKS \nAbstract: We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. The ultimate goal is prediction accuracy. We analyze two time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. Weight-elimination also manages to extract some part of the dynamics of the notoriously noisy currency exchange rates and makes the network solution interpretable. ",
+ "neighbors": [
+ 86,
+ 113,
+ 471,
+ 613,
+ 1241,
+ 1242,
+ 1304
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 952,
+ "label": 1,
+ "text": "Title: An Analysis of the MAX Problem in Genetic Programming hold only in some cases, in\nAbstract: We present a detailed analysis of the evolution of genetic programming (GP) populations using the problem of finding a program which returns the maximum possible value for a given terminal and function set and a depth limit on the program tree (known as the MAX problem). We confirm the basic message of [ Gathercole and Ross, 1996 ] that crossover together with program size restrictions can be responsible for premature convergence to a suboptimal solution. We show that this can happen even when the population retains a high level of variety and show that in many cases evolution from the sub-optimal solution to the solution is possible if sufficient time is allowed. In both cases theoretical models are presented and compared with actual runs. ",
+ "neighbors": [
+ 682,
+ 707,
+ 1034,
+ 1145,
+ 1178,
+ 1222
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 953,
+ "label": 5,
+ "text": "Title: Prognosing the Survival Time of the Patients with the Anaplastic Thyroid Carcinoma with Machine Learning \nAbstract: Anaplastic thyroid carcinoma is a rare but very aggressive tumor. Many factors that might influence the survival of patients have been suggested. The aim of our study was to determine which of the factors, known at the time of admission to the hospital, might predict survival of patients with anaplastic thyroid carcinoma. Our aim was also to assess the relative importance of the factors and to identify potentially useful decision and regression trees generated by machine learning algorithms. Our study included 126 patients (90 females and 36 males; mean age was 66.7 years) with anaplastic thyroid carcinoma treated at the Institute of Oncology Ljubljana from 1972 to 1992. Patients were classified into categories according to 11 attributes: sex, age, history, physical findings, extent of disease on admission, and tumor morphology. In this paper we compare the machine learning approach with the previous statistical evaluations on the problem (uni-variate and multivariate analysis) and show that it can provide more thorough analysis and improve understanding of the data. ",
+ "neighbors": [
+ 183,
+ 665,
+ 875,
+ 934,
+ 936
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 954,
+ "label": 6,
+ "text": "Title: Machine Learning, 22(1/2/3):95-121, 1996. On the Worst-case Analysis of Temporal-difference Learning Algorithms \nAbstract: We study the behavior of a family of learning algorithms based on Sutton's method of temporal differences. In our on-line learning framework, learning takes place in a sequence of trials, and the goal of the learning algorithm is to estimate a discounted sum of all the reinforcements that will be received in the future. In this setting, we are able to prove general upper bounds on the performance of a slightly modified version of Sutton's so-called TD() algorithm. These bounds are stated in terms of the performance of the best linear predictor on the given training sequence, and are proved without making any statistical assumptions of any kind about the process producing the learner's observed training sequence. We also prove lower bounds on the performance of any algorithm for this learning problem, and give a similar analysis of the closely related problem of learning to predict in a model in which the learner must produce predictions for a whole batch of observations before receiving reinforcement. ",
+ "neighbors": [
+ 327,
+ 426,
+ 774
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 955,
+ "label": 1,
+ "text": "Title: Dynamic Parameter Encoding for Genetic Algorithms \nAbstract: The common use of static binary place-value codes for real-valued parameters of the phenotype in Holland's genetic algorithm (GA) forces either the sacrifice of representational precision for efficiency of search or vice versa. Dynamic Parameter Encoding (DPE) is a mechanism that avoids this dilemma by using convergence statistics derived from the GA population to adaptively control the mapping from fixed-length binary genes to real values. DPE is shown to be empirically effective and amenable to analysis; we explore the problem of premature convergence in GAs through two convergence models.",
+ "neighbors": [
+ 70,
+ 91,
+ 93,
+ 629,
+ 702,
+ 822,
+ 856
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 956,
+ "label": 2,
+ "text": "Title: Improving the Performance of Radial Basis Function Networks by Learning Center Locations \nAbstract: This paper reviews the application of Gibbs sampling to a cointegrated VAR system. Aggregate imports and import prices for Belgium are modelled using two cointegrating relations. Gibbs sampling techniques are used to estimate from a Bayesian perspective the cointegrating relations and their weights in the VAR system. Extensive use of spectral analysis is made to get insight into convergence issues. ",
+ "neighbors": [
+ 357,
+ 496,
+ 917,
+ 1245
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 957,
+ "label": 3,
+ "text": "Title: Decision Analysis by Augmented Probability Simulation \nAbstract: We provide a generic Monte Carlo method to find the alternative of maximum expected utility in a decision analysis. We define an artificial distribution on the product space of alternatives and states, and show that the optimal alternative is the mode of the implied marginal distribution on the alternatives. After drawing a sample from the artificial distribution, we may use exploratory data analysis tools to approximately identify the optimal alternative. We illustrate our method for some important types of influence diagrams. (Decision Analysis, Influence Diagrams, Markov chain Monte Carlo, Simulation) ",
+ "neighbors": [
+ 21
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 958,
+ "label": 1,
+ "text": "Title: A Stochastic Search Approach to Grammar Induction \nAbstract: This paper describes a new sampling-based heuristic for tree search named SAGE and presents an analysis of its performance on the problem of grammar induction. This last work has been inspired by the Abbadingo DFA learning competition [14] which took place between Mars and November 1997. SAGE ended up as one of the two winners in that competition. The second winning algorithm, first proposed by Rod-ney Price, implements a new evidence-driven heuristic for state merging. Our own version of this heuristic is also described in this paper and compared to SAGE.",
+ "neighbors": [
+ 91,
+ 462,
+ 702,
+ 890,
+ 948
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 959,
+ "label": 0,
+ "text": "Title: Supporting Conversational Case-Based Reasoning in an Integrated Reasoning Framework Conversational Case-Based Reasoning \nAbstract: Conversational case-based reasoning (CCBR) has been successfully used to assist in case retrieval tasks. However, behavioral limitations of CCBR motivate the search for integrations with other reasoning approaches. This paper briefly describes our group's ongoing efforts towards enhancing the inferencing behaviors of a conversational case-based reasoning development tool named NaCoDAE. In particular, we focus on integrating NaCoDAE with machine learning, model-based reasoning, and generative planning modules. This paper defines CCBR, briefly summarizes the integrations, and explains how they enhance the overall system. Our research focuses on enhancing the performance of conversational case-based reasoning (CCBR) systems (Aha & Breslow, 1997). CCBR is a form of case-based reasoning where users initiate problem solving conversations by entering an initial problem description in natural language text. This text is assumed to be a partial rather than a complete problem description. The CCBR system then assists in eliciting refinements of this description and in suggesting solutions. Its primary purpose is to provide a focus of attention for the user so as to quickly provide a solution(s) for their problem. Figure 1 summarizes the CCBR problem solving cycle. Cases in a CCBR library have three components: ",
+ "neighbors": [
+ 564,
+ 571
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 960,
+ "label": 1,
+ "text": "Title: A Simulation of Adaptive Agents in a Hostile Environment \nAbstract: In this paper we use the genetic programming technique to evolve programs to control an autonomous agent capable of learning how to survive in a hostile environment. In order to facilitate this goal, agents are run through random environment configurations. Randomly generated programs, which control the interaction of the agent with its environment, are recombined to form better programs. Each generation of the population of agents is placed into the Simulator with the ultimate goal of producing an agent capable of surviving any environment. The environment that an agent is presented consists of other agents, mines, and energy. The goal of this research is to construct a program which when executed will allow an agent (or agents) to correctly sense, and mark, the presence of items (energy and mines) in any environment. The Simulator determines the raw fitness of each agent by interpreting the associated program. General programs are evolved to solve this problem. Different environmental setups are presented to show the generality of the solution. These environments include one agent in a fixed environment, one agent in a fluctuating environment, and multiple agents in a fluctuating environment cooperating together. The genetic programming technique was extremely successful. The average fitness per generation in all three environments tested showed steady improvement. Programs were successfully generated that enabled an agent to handle any possible environment. ",
+ "neighbors": [
+ 218,
+ 234,
+ 664
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 961,
+ "label": 1,
+ "text": "Title: Evolving nonTrivial Behaviors on Real Robots: an Autonomous Robot that Picks up Objects \nAbstract: Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a nontrivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. ",
+ "neighbors": [
+ 123,
+ 308,
+ 940
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 962,
+ "label": 4,
+ "text": "Title: Reinforcement Learning Algorithm for Partially Observable Markov Decision Problems \nAbstract: Increasing attention has been paid to reinforcement learning algorithms in recent years, partly due to successes in the theoretical analysis of their behavior in Markov environments. If the Markov assumption is removed, however, neither generally the algorithms nor the analyses continue to be usable. We propose and analyze a new learning algorithm to solve a certain class of non-Markov decision problems. Our algorithm applies to problems in which the environment is Markov, but the learner has restricted access to state information. The algorithm involves a Monte-Carlo policy evaluation combined with a policy improvement method that is similar to that of Markov decision problems and is guaranteed to converge to a local maximum. The algorithm operates in the space of stochastic policies, a space which can yield a policy that performs considerably better than any deterministic policy. Although the space of stochastic policies is continuous|even for a discrete action space|our algorithm is computationally tractable. ",
+ "neighbors": [
+ 327,
+ 350,
+ 426
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 963,
+ "label": 2,
+ "text": "Title: DISCRETE-TIME TRANSITIVITY AND ACCESSIBILITY: ANALYTIC SYSTEMS 1 \nAbstract: This paper studies the problem, and establishes the desired implication for analytic systems in several cases: (i) compact state space, (ii) under a Poisson stability condition, and (iii) in a generic sense. In addition, the paper studies accessibility properties of the \"control sets\" recently introduced in the context of dynamical systems studies. Finally, various examples and counterexamples are provided relating the various Lie algebras introduced in past work. ",
+ "neighbors": [
+ 1050
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 964,
+ "label": 6,
+ "text": "Title: A Quantum Computational Learning Algorithm \nAbstract: An interesting classical result due to Jackson allows polynomial-time learning of the function class DNF using membership queries. Since in most practical learning situations access to a membership oracle is unrealistic, this paper explores the possibility that quantum computation might allow a learning algorithm for DNF that relies only on example queries. A natural extension of Fourier-based learning into the quantum domain is presented. The algorithm requires only an example oracle, and it runs in O( 2 n ) time, a result that appears to be classically impossible. The algorithm is unique among quantum algorithms in that it does not assume a priori knowledge of a function and does not operate on a superposition that includes all possible basis states. ",
+ "neighbors": [
+ 257
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 965,
+ "label": 3,
+ "text": "Title: Algebraic Techniques for Efficient Inference in Bayesian Networks \nAbstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. These algorithms use graph-theoretic techniques to analyze and exploit network topology. In this paper, we examine the problem of efficient probabilistic inference in a belief network as a combinatorial optimization problem, that of finding an optimal factoring given an algebraic expression over a set of probability distributions. We define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and present simple, easily implemented algorithms with excellent performance. We also show how use of an algebraic perspective permits significant extension to the belief net representation. ",
+ "neighbors": [
+ 1137
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 966,
+ "label": 4,
+ "text": "Title: The Neural Network House: An Overview \nAbstract: Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage. However, home residents who are ignorant of the physics of energy utilization cannot design environmental control strategies, but neither can energy management experts who are ignorant of the behavior patterns of the inhabitants. Adaptive control seems the only alternative. We have begun building an adaptive control system that can infer appropriate rules of operation for home comfort systems based on the lifestyle of the inhabitants and energy conservation goals. Recent research has demonstrated the potential of neural networks for intelligent control. We are constructing a prototype control system in an actual residence using neural network reinforcement learning and prediction techniques. The residence is equipped with sensors to provide information about environmental conditions (e.g., temperatures, ambient lighting level, sound and motion in each room) and actuators to control the gas furnace, electric space heaters, gas hot water heater, lighting, motorized blinds, ceiling fans, and dampers in the heating ducts. This paper presents an overview of the project as it now stands.",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 967,
+ "label": 1,
+ "text": "Title: Soft Computing: the Convergence of Emerging Reasoning Technologies \nAbstract: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms.",
+ "neighbors": [
+ 93,
+ 430,
+ 1313
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 968,
+ "label": 3,
+ "text": "Title: Belief Maintenance in Bayesian Networks \nAbstract: Two issues of an intelligent navigation robot have been addressed in this work. First is the robot's ability to learn a representation of the local environment and use this representation to identify which local environment it is in. This is done by first extracting features from the sensors which are more informative than just distances of obstacles in various directions. Using these features a reduced ring representation (RRR) of the local environment is derived. As the robot navigates, it learns the RRR signatures of all the new environment types it encounters. For purpose of identification, a ring matching criteria is proposed where the robot tries to match the RRR from the sensory input to one of the RRRs in its library. The second issue addressed is that of learning hill climbing control laws in the local environments. Unlike conventional neuro-controllers, a reinforcement learning framework, where the robot first learns a model of the environment and then learns the control law in terms of a neural network is proposed here. The reinforcement function is generated from the sensory inputs of the robot before and after a control action is taken. Three key results shown in this work are that (1) The robot is able to build its library of RRR signatures perfectly even with significant sensor noise for eight different local environ-mets, (2) It was able to identify its local environment with an accuracy of more than 96%, once the library is build, and (3) the robot was able to learn adequate hill climbing control laws which take it to the distinctive state of the local environment for five different environment types.",
+ "neighbors": [
+ 1350,
+ 1351
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 969,
+ "label": 2,
+ "text": "Title: A Brief History of Connectionism \nAbstract: Connectionist research is firmly established within the scientific community, especially within the multi-disciplinary field of cognitive science. This diversity, however, has created an environment which makes it difficult for connectionist researchers to remain aware of recent advances in the field, let alone understand how the field has developed. This paper attempts to address this problem by providing a brief guide to connectionist research. The paper begins by defining the basic tenets of connectionism. Next, the development of connectionist research is traced, commencing with connectionism's philosophical predecessors, moving to early psychological and neuropsychological influences, followed by the mathematical and computing contributions to connectionist research. Current research is then reviewed, focusing specifically on the different types of network architectures and learning rules in use. The paper concludes by suggesting that neural network research|at least in cognitive science|should move towards models that incorporate the relevant functional principles inherent in neurobiological systems. ",
+ "neighbors": [
+ 231,
+ 357,
+ 375,
+ 430,
+ 1317
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 970,
+ "label": 3,
+ "text": "Title: Some remarks on Scheiblechner's treatment of ISOP models. \nAbstract: Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Si-jtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 971,
+ "label": 3,
+ "text": "Title: A Characterization of Monotone Unidimensional Latent Variable Models \nAbstract: Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Si-jtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 972,
+ "label": 2,
+ "text": "Title: Canonical Momenta Indicators of Financial Markets and Neocortical EEG \nAbstract: A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. ",
+ "neighbors": [
+ 1291
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 973,
+ "label": 2,
+ "text": "Title: Networks of Spiking Neurons: The Third Generation of Neural Network Models \nAbstract: The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neuro biology.",
+ "neighbors": [
+ 565,
+ 1025,
+ 1058,
+ 1322
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 974,
+ "label": 5,
+ "text": "Title: Extending Theory Refinement to M-of-N Rules \nAbstract: In recent years, machine learning research has started addressing a problem known as theory refinement. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the Either propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from a exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend Either to refine M-of-N rules. The resulting algorithm, Neither (New Either), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of Neither, we present experimental results from two real-world domains. ",
+ "neighbors": [
+ 72,
+ 892,
+ 1290
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 975,
+ "label": 5,
+ "text": "Title: Learning Singly-Recursive Relations from Small Datasets \nAbstract: The inductive logic programming system LOPSTER was created to demonstrate the advantage of basing induction on logical implication rather than -subsumption. LOPSTER's sub-unification procedures allow it to induce recursive relations using a minimum number of examples, whereas inductive logic programming algorithms based on -subsumption require many more examples to solve induction tasks. However, LOPSTER's input examples must be carefully chosen; they must be along the same inverse resolution path. We hypothesize that an extension of LOPSTER can efficiently induce recursive relations without this requirement. We introduce a generalization of LOPSTER named CRUSTACEAN that has this capability and empirically evaluate its ability to induce recursive relations. ",
+ "neighbors": [
+ 995
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 976,
+ "label": 4,
+ "text": "Title: Least-Squares Temporal Difference Learning \nAbstract: Submitted to NIPS-98 TD() is a popular family of algorithms for approximate policy evaluation in large MDPs. TD() works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto [5] eliminates all stepsize parameters and improves data efficiency. This paper extends Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from = 0 to arbitrary values of ; at the extreme of = 1, the resulting algorithm is shown to be a practical formulation of supervised linear regression. Third, it presents a novel, intuitive interpretation of LSTD as a model-based reinforcement learning technique.",
+ "neighbors": [
+ 170,
+ 327,
+ 328
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 977,
+ "label": 3,
+ "text": "Title: Importance Sampling \nAbstract: Technical Report No. 9805, Department of Statistics, University of Toronto Abstract. Simulated annealing | moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions | has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. ",
+ "neighbors": [
+ 25,
+ 1212
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 978,
+ "label": 1,
+ "text": "Title: Genetic Programming and Redundancy \nAbstract: The Genetic Programming optimization method (GP) elaborated by John Koza [ Koza, 1992 ] is a variant of Genetic Algorithms. The search space of the problem domain consists of computer programs represented as parse trees, and the crossover operator is realized by an exchange of subtrees. Empirical analyses show that large parts of those trees are never used or evaluated which means that these parts of the trees are irrelevant for the solution or redundant. This paper is concerned with the identification of the redundancy occuring in GP. It starts with a mathematical description of the behavior of GP and the conclusions drawn from that description among others explain the \"size problem\" which denotes the phenomenon that the average size of trees in the population grows with time.",
+ "neighbors": [
+ 28,
+ 91,
+ 218,
+ 490,
+ 667,
+ 1347,
+ 1353
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 979,
+ "label": 2,
+ "text": "Title: Path-integral evolution of chaos embedded in noise: Duffing neocortical analog \nAbstract: A two dimensional time-dependent Duffing oscillator model of macroscopic neocortex exhibits chaos for some ranges of parameters. We embed this model in moderate noise, typical of the context presented in real neocortex, using PATHINT, a non-Monte-Carlo path-integral algorithm that is particularly adept in handling nonlinear Fokker-Planck systems. This approach shows promise to investigate whether chaos in neocortex, as predicted by such models, can survive in noisy contexts. ",
+ "neighbors": [
+ 983,
+ 1146
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 980,
+ "label": 2,
+ "text": "Title: Pruning with generalization based weight saliencies: flOBD, flOBS \nAbstract: The purpose of most architecture optimization schemes is to improve generalization. In this presentation we suggest to estimate the weight saliency as the associated change in generalization error if the weight is pruned. We detail the implementation of both an O(N )-storage scheme extending OBD, as well as an O(N 2 ) scheme extending OBS. We illustrate the viability of the approach on pre diction of a chaotic time series.",
+ "neighbors": [
+ 1264
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 981,
+ "label": 1,
+ "text": "Title: Using Genetic Programming to Evolve Board Evaluation Functions \nAbstract: The purpose of most architecture optimization schemes is to improve generalization. In this presentation we suggest to estimate the weight saliency as the associated change in generalization error if the weight is pruned. We detail the implementation of both an O(N )-storage scheme extending OBD, as well as an O(N 2 ) scheme extending OBS. We illustrate the viability of the approach on pre diction of a chaotic time series.",
+ "neighbors": [
+ 10,
+ 234,
+ 300,
+ 327,
+ 1206
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 982,
+ "label": 2,
+ "text": "Title: STATISTICAL MECHANICS OF COMBAT WITH HUMAN FACTORS \nAbstract: This highly interdisciplinary project extends previous work in combat modeling and in control-theoretic descriptions of decision-making human factors in complex activities. A previous paper has established the first theory of the statistical mechanics of combat (SMC), developed using modern methods of statistical mechanics, baselined to empirical data gleaned from the National Training Center (NTC). This previous project has also established a JANUS(T)-NTC computer simulation/wargame of NTC, providing a statistical ``what-if '' capability for NTC scenarios. This mathematical formulation is ripe for control-theoretic extension to include human factors, a methodology previously developed in the context of teleoperated vehicles. Similar NTC scenarios differing at crucial decision points will be used for data to model the inuence of decision making on combat. The results may then be used to improve present human factors and C 2 algorithms in computer simulations/wargames. Our approach is to ``subordinate'' the SMC nonlinear stochastic equations, fitted to NTC scenarios, to establish the zeroth order description of that combat. In practice, an equivalent mathematical-physics representation is used, more suitable for numerical and formal work, i.e., a Lagrangian representation. Theoretically, these equations are nested within a larger set of nonlinear stochastic operator-equations which include C 3 human factors, e.g., supervisory decisions. In this study, we propose to perturb this operator theory about the SMC zeroth order set of equations. Then, subsets of scenarios fit to zeroth order, originally considered to be similarly degenerate, can be further split perturbatively to distinguish C 3 decision-making inuences. New methods of Very Fast Simulated Re-Annealing (VFSR), developed in the previous project, will be used for fitting these models to empirical data. ",
+ "neighbors": [
+ 983,
+ 1146
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 983,
+ "label": 2,
+ "text": "Title: Application of statistical mechanics methodol- ogy to term-structure bond-pricing models, Mathl. Comput. Modelling Application of\nAbstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent ",
+ "neighbors": [
+ 979,
+ 982,
+ 1146,
+ 1291
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 984,
+ "label": 1,
+ "text": "Title: Evaluating and Improving Steady State Evolutionary Algorithms on Constraint Satisfaction Problems \nAbstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent ",
+ "neighbors": [
+ 482,
+ 590
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 985,
+ "label": 2,
+ "text": "Title: Toward a unified theory of spatiotemporal processing in the retina \nAbstract: Traditional evolutionary optimization algorithms assume a static evaluation function, according to which solutions are evolved. Incremental evolution is an approach through which a dynamic evaluation function is scaled over time in order to improve the performance of evolutionary optimization. In this paper, we present empirical results that demonstrate the effectiveness of this approach for genetic programming. Using two domains, a two-agent pursuit-evasion game and the Tracker [6] trail-following task, we demonstrate that incremental evolution is most successful when applied near the beginning of an evolutionary run. We also show that incremental evolution can be successful when the intermediate evaluation functions are more difficult than the target evaluation function, as well as when they are easier than the target function. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 986,
+ "label": 1,
+ "text": "Title: On the Effectiveness of Evolutionary Search in High-Dimensional NK-Landscapes \nAbstract: NK-landscapes offer the ability to assess the performance of evolutionary algorithms on problems with different degrees of epistasis. In this paper, we study the performance of six algorithms in NK-landscapes with low and high dimension while keeping the amount of epistatic interactions constant. The results show that compared to genetic local search algorithms, the performance of standard genetic algorithms employing crossover or mutation significantly decreases with increasing problem size. Furthermore, with increasing K, crossover based algorithms are in both cases outperformed by mutation based algorithms. However, the relative performance differences between the algorithms grow significantly with the dimension of the search space, indicating that it is important to consider high-dimensional landscapes for evaluating the performance of evolutionary algorithms. ",
+ "neighbors": [
+ 91,
+ 419,
+ 794
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 987,
+ "label": 3,
+ "text": "Title: Rational Belief Revision (Preliminary Report) \nAbstract: Theories of rational belief revision recently proposed by Alchourron, Gardenfors, Makin-son, and Nebel illuminate many important issues but impose unnecessarily strong standards for correct revisions and make strong assumptions about what information is available to guide revisions. We reconstruct these theories according to an economic standard of rationality in which preferences are used to select among alternative possible revisions. By permitting multiple partial specifications of preferences in ways closely related to preference-based nonmonotonic logics, the reconstructed theory employs information closer to that available in practice and offers more flexible ways of selecting revisions. We formally compare this new conception of rational belief revision with the original theories, adapt results about universal default theories to prove that there is unlikely to be any universal method of rational belief revision, and examine formally how different limitations on rationality affect belief revision.",
+ "neighbors": [
+ 196,
+ 1030,
+ 1031,
+ 1069,
+ 1077
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 988,
+ "label": 3,
+ "text": "Title: Toward a Market Model for Bayesian Inference \nAbstract: We present a methodology for representing probabilistic relationships in a general-equilibrium economic model. Specifically, we define a precise mapping from a Bayesian network with binary nodes to a market price system where consumers and producers trade in uncertain propositions. We demonstrate the correspondence between the equilibrium prices of goods in this economy and the probabilities represented by the Bayesian network. A computational market model such as this may provide a useful framework for investigations of belief aggregation, distributed probabilistic inference, resource allocation under uncertainty, and other problems of de centralized uncertainty.",
+ "neighbors": [
+ 1097
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 989,
+ "label": 0,
+ "text": "Title: CHARADE: a Platform for Emergencies Management Systems \nAbstract: This paper describe the functional architecture of CHARADE a software platform devoted to the development of a new generation of intelligent environmental decision support systems. The CHARADE platform is based on the a task-oriented approach to system design and on the exploitation of a new architecture for problem solving, that integrates case-based reasoning and constraint reasoning. The platform is developed in an objectoriented environment and upon that a demonstrator will be developed for managing first intervention attack to forest fires.",
+ "neighbors": [
+ 1188
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 990,
+ "label": 2,
+ "text": "Title: GENERALIZATION PERFORMANCE OF BACKPROPAGATION LEARNING ON A SYLLABIFICATION TASK \nAbstract: We investigated the generalization capabilities of backpropagation learning in feed-forward and recurrent feed-forward connectionist networks on the assignment of syllable boundaries to orthographic representations in Dutch (hyphenation). This is a difficult task because phonological and morphological constraints interact, leading to ambiguity in the input patterns. We compared the results to different symbolic pattern matching approaches, and to an exemplar-based generalization scheme, related to a k-nearest neighbour approach, but using a similarity metric weighed by the relative information entropy of positions in the training patterns. Our results indicate that the generalization performance of backpropagation learning for this task is not better than that of the best symbolic pattern matching approaches, and of exemplar-based generalization. ",
+ "neighbors": [
+ 456,
+ 653,
+ 787
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 991,
+ "label": 2,
+ "text": "Title: Pruning Strategies for the MTiling Constructive Learning Algorithm \nAbstract: We present a framework for incorporating pruning strategies in the MTiling constructive neural network learning algorithm. Pruning involves elimination of redundant elements (connection weights or neurons) from a network and is of considerable practical interest. We describe three elementary sensitivity based strategies for pruning neurons. Experimental results demonstrate a moderate to significant reduction in the network size without compromising the network's generalization performance. ",
+ "neighbors": [
+ 288,
+ 1235
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 992,
+ "label": 2,
+ "text": "Title: Independent Component Analysis by General Non-linear Hebbian-like Learning Rules \nAbstract: A number of neural learning rules have been recently proposed for Independent Component Analysis (ICA). The rules are usually derived from information-theoretic criteria such as maximum entropy or minimum mutual information. In this paper, we show that in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather suprisingly, practically any non-linear function can be used in the learning rule, provided only that the sign of the Hebbian/anti-Hebbian term is chosen correctly. In addition to the Hebbian-like mechanism, the weight vector is here constrained to have unit norm, and the data is preprocessed by prewhitening, or sphering. These results imply that one can choose the non-linearity so as to optimize desired statistical or numerical criteria.",
+ "neighbors": [
+ 331,
+ 335,
+ 483,
+ 609
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 993,
+ "label": 2,
+ "text": "Title: Submitted to Circuits, Systems and Signal Processing Neural Network Constructive Algorithms: Trading Generalization for Learning Efficiency? \nAbstract: There are currently several types of constructive, or growth, algorithms available for training a feed-forward neural network. This paper describes and explains the main ones, using a fundamental approach to the multi-layer perceptron problem-solving mechanisms. The claimed convergence properties of the algorithms are verified using just two mapping theorems, which consequently enables all the algorithms to be unified under a basic mechanism. The algorithms are compared and contrasted and the deficiencies of some highlighted. The fundamental reasons for the actual success of these algorithms are extracted, and used to suggest where they might most fruitfully be applied. A suspicion that they are not a panacea for all current neural network difficulties, and that one must somewhere along the line pay for the learning efficiency they promise, is developed into an argument that their generalization abilities will lie on average below that of back-propagation. ",
+ "neighbors": [
+ 135
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 994,
+ "label": 4,
+ "text": "Title: Generalized Prioritized Sweeping \nAbstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping. ",
+ "neighbors": [
+ 321,
+ 322,
+ 328,
+ 1045,
+ 1269
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 995,
+ "label": 5,
+ "text": "Title: The Difficulties of Learning Logic Programs with Cut \nAbstract: As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is unfeasible. An alternative solution is to generate first a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before and this seems to be a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. As a conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and current induction techniques should probably be restricted to purely declarative logic languages.",
+ "neighbors": [
+ 124,
+ 975,
+ 1303
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 996,
+ "label": 6,
+ "text": "Title: The Complexity of Theory Revision \nAbstract: A knowledge-based system uses its database (a.k.a. its \"theory\") to produce answers to the queries it receives. Unfortunately, these answers may be incorrect if the underlying theory is faulty. Standard \"theory revision\" systems use a given set of \"labeled queries\" (each a query paired with its correct answer) to transform the given theory, by adding and/or deleting either rules and/or antecedents, into a related theory that is as accurate as possible. After formally defining the theory revision task and bounding its sample complexity, this paper addresses the task's computational complexity. It first proves that, unless P = N P , no polynomial time algorithm can identify the optimal theory, even given the exact distribution of queries, except in the most trivial of situations. It also shows that, except in such trivial situations, no polynomial-time algorithm can produce a theory whose inaccuracy is even close (i.e., within a particular polynomial factor) to optimal. These results justify the standard practice of hill-climbing to a locally-optimal theory, based on a given set of labeled sam ples.",
+ "neighbors": [
+ 1303
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 997,
+ "label": 2,
+ "text": "Title: GUESSING CAN OUTPERFORM MANY LONG TIME LAG ALGORITHMS \nAbstract: Numerous recent papers focus on standard recurrent nets' problems with long time lags between relevant signals. Some propose rather sophisticated, alternative methods. We show: many problems used to test previous methods can be solved more quickly by random weight guessing. ",
+ "neighbors": [
+ 38,
+ 561
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 998,
+ "label": 6,
+ "text": "Title: Efficient Algorithms for Inverting Evolution \nAbstract: Evolution is a stochastic process which operates on the DNA of species. The evolutionary process leaves tell-tale signs in the DNA which can be used to construct phylogenies, or evolutionary trees, for a set of species. Maximum Likelihood Estimations (MLE) methods seek the evolutionary tree which is most likely to have produced the DNA under consideration. While these methods are widely accepted and intellectually satisfying, they have been computationally intractable. In this paper, we address the intractability of MLE methods as follows. We introduce a metric on stochastic process models of evolution. We show that this metric is meaningful by proving that in order for any algorithm to distinguish between two stochatic models that are close according to this metric, it needs to be given a lot of observations. We complement this result with a simple and efficient algorithm for inverting the stochastic process of evolution, that is, for building the tree from observations on the DNA of the species. Our result can be viewed as a result on the PAC-learnability of the class of distributions produced by tree-like processes. Though there have been many heuristics suggested for this problem, our algorithm is the first one with a guaranteed convergence rate, and further, this rate is within a polynomial of the lower-bound rate we establish. Ours is also the the first polynomial-time algorithm which is guaranteed to converge at all to the correct tree. ",
+ "neighbors": [
+ 172,
+ 333,
+ 1056,
+ 1104,
+ 1164
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 999,
+ "label": 1,
+ "text": "Title: Tackling the Boolean Even N Parity Problem with Genetic Programming and Limited-Error Fitness standard GP\nAbstract: This paper presents Limited Error Fitness (LEF), a modification to the standard supervised learning approach in Genetic Programming (GP), in which an individual's fitness score is based on how many cases remain uncovered in the ordered training set after the individual exceeds an error limit. The training set order and the error limit are both altered dynamically in response to the performance of the fittest individual in the previous generation. ",
+ "neighbors": [
+ 28,
+ 234,
+ 1000,
+ 1206
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1000,
+ "label": 1,
+ "text": "Title: Small Populations over Many Generations can beat Large Populations over Few Generations in Genetic Programming \nAbstract: This paper looks at the use of small populations in Genetic Programming (GP), where the trend in the literature appears to be towards using as large a population as possible, which requires more memory resources and CPU-usage is less efficient. Dynamic Subset Selection (DSS) and Limited Error Fitness (LEF) are two different, adaptive variations of the standard supervised learning method used in GP. This paper compares the performance of GP, GP+DSS, and GP+LEF, on a 958 case classification problem, using a small population size of 50. A similar comparison between GP and GP+DSS is done on a larger and messier 3772 case classification problem. For both problems, GP+DSS with the small population size consistently produces a better answer using fewer tree evaluations than other runs using much larger populations. Even standard GP can be seen to perform well with the much smaller population size, indicating that it is certainly worth an exploratory run or three with a small population size before assuming that a large population size is necessary. It is an interesting notion that smaller can mean faster and better. ",
+ "neighbors": [
+ 234,
+ 999,
+ 1206
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1001,
+ "label": 4,
+ "text": "Title: Two Methods for Hierarchy Learning in Reinforcement Environments \nAbstract: This paper describes two methods for hierarchically organizing temporal behaviors. The first is more intuitive: grouping together common sequences of events into single units so that they may be treated as individual behaviors. This system immediately encounters problems, however, because the units are binary, meaning the behaviors must execute completely or not at all, and this hinders the construction of good training algorithms. The system also runs into difficulty when more than one unit is (or should be) active at the same time. The second system is a hierarchy of transition values. This hierarchy dynamically modifies the values that specify the degree to which one unit should follow another. These values are continuous, allowing the use of gradient descent during learning. Furthermore, many units are active at the same time as part of the system's normal functionings.",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1002,
+ "label": 5,
+ "text": "Title: Profile-Driven Instruction Level Parallel Scheduling with Application to Super Blocks \nAbstract: Code scheduling to exploit instruction level parallelism (ILP) is a critical problem in compiler optimization research, in light of the increased use of long-instruction-word machines. Unfortunately, optimum scheduling is com-putationally intractable, and one must resort to carefully crafted heuristics in practice. If the scope of application of a scheduling heuristic is limited to basic blocks, considerable performance loss may be incurred at block boundaries. To overcome this obstacle, basic blocks can be coalesced across branches to form larger regions such as super blocks. In the literature, these regions are typically scheduled using algorithms that are either oblivious to profile information (under the assumption that the process of forming the region has fully utilized the profile information), or use the profile information as an addendum to classical scheduling techniques. We believe that even for the simple case of linear code regions such as super blocks, additional performance improvement can be gained by utilizing the profile information in scheduling as well. We propose a general paradigm for converting any profile-insensitive list sched-uler to a profile-sensitive scheduler. Our technique is developed via a theoretical analysis of a simplified abstract model of the general problem of profile-driven scheduling over any acyclic code region, yielding a scoring measure for ranking branch instructions. The ranking digests the profile information and has the useful property that scheduling with respect to rank is provably good for minimizing the expected completion time of the region, within the limits of the abstraction. While the ranking scheme is computation-ally intractable in the most general case, it is practicable for super blocks and suggests the heuristic that we present in this paper for profile-driven scheduling of super blocks. Experiments show that our heuristic offers substantial performance improvement over prior methods on a range of integer benchmarks and several machine models. ",
+ "neighbors": [
+ 1136
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1003,
+ "label": 3,
+ "text": "Title: On Sequential Simulation-Based Methods for Bayesian Filtering \nAbstract: In this report, we present an overview of sequential simulation-based methods for Bayesian filtering of nonlinear and non-Gaussian dynamic models. It includes in a general framework numerous methods proposed independently in various areas of science and proposes some original developments. ",
+ "neighbors": [
+ 55
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1004,
+ "label": 0,
+ "text": "Title: Case Retrieval Nets: Basic Ideas and Extensions \nAbstract: An efficient retrieval of a relatively small number of relevant cases from a huge case base is a crucial subtask of Case-Based Reasoning. In this article, we present Case Retrieval Nets (CRNs), a memory model that has recently been developed for this task. The main idea is to apply a spreading activation process to a net-like case memory in order to retrieve cases being similar to a posed query case. We summarize the basic ideas of CRNs, suggest some useful extensions, and present some initial experimental results which suggest that CRNs can successfully handle case bases larger than considered usually in the CBR community. ",
+ "neighbors": [
+ 41,
+ 1005,
+ 1010,
+ 1116,
+ 1196,
+ 1268
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1005,
+ "label": 0,
+ "text": "Title: Applying Case Retrieval Nets to Diagnostic Tasks in Technical Domains \nAbstract: This paper presents Objectdirected Case Retrieval Nets, a memory model developed for an application of Case-Based Reasoning to the task of technical diagnosis. The key idea is to store cases, i.e. observed symptoms and diagnoses, in a network and to enhance this network with an object model encoding knowledge about the devices in the application domain. ",
+ "neighbors": [
+ 41,
+ 1004,
+ 1010,
+ 1116,
+ 1196,
+ 1268
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1006,
+ "label": 6,
+ "text": "Title: NP-Completeness of Minimum Rule Sets \nAbstract: Rule induction systems seek to generate rule sets which are optimal in the complexity of the rule set. This paper develops a formal proof of the NP-Completeness of the problem of generating the simplest rule set (MIN RS) which accurately predicts examples in the training set for a particular type of generalization algorithm algorithm and complexity measure. The proof is then informally extended to cover a broader spectrum of complexity measures and learning algorithms. ",
+ "neighbors": [
+ 1286
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1007,
+ "label": 4,
+ "text": "Title: Self-Improving Factory Simulation using Continuous-time Average-Reward Reinforcement Learning \nAbstract: Many factory optimization problems, from inventory control to scheduling and reliability, can be formulated as continuous-time Markov decision processes. A primary goal in such problems is to find a gain-optimal policy that minimizes the long-run average cost. This paper describes a new average-reward algorithm called SMART for finding gain-optimal policies in continuous time semi-Markov decision processes. The paper presents a detailed experimental study of SMART on a large unreliable production inventory problem. SMART outperforms two well-known reliability heuristics from industrial engineering. A key feature of this study is the integration of the reinforcement learning algorithm directly into two commercial discrete-event simulation packages, ARENA and CSIM, paving the way for this approach to be applied to many other factory optimization problems for which there already exist simulation models.",
+ "neighbors": [
+ 269,
+ 315,
+ 320,
+ 327,
+ 362
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1008,
+ "label": 6,
+ "text": "Title: Continuous-valued Xof-N Attributes Versus Nominal Xof-N Attributes for Constructive Induction: A Case Study \nAbstract: An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. In this paper, we explore the characteristics and performance of continuous-valued Xof-N attributes versus nominal Xof-N attributes for constructive induction. Nominal Xof-Ns are more representationally powerful than continuous-valued Xof-Ns, but the former suffer the \"fragmentation\" problem, although some mechanisms such as subsetting can help to solve the problem. Two approaches to constructive induction using continuous-valued Xof-Ns are described. Continuous-valued Xof-Ns perform better than nominal ones on domains that need Xof-Ns with only one cut point. On domains that need Xof-N representations with more than one cut point, nominal Xof-Ns perform better than continuous-valued ones. Experimental results on a set of artificial and real-world domains support these statements. ",
+ "neighbors": [
+ 892,
+ 917,
+ 1009,
+ 1057,
+ 1340
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1009,
+ "label": 6,
+ "text": "Title: Effects of Different Types of New Attribute on Constructive Induction \nAbstract: This paper studies the effects on decision tree learning of constructing four types of attribute (conjunctive, disjunctive, Mof-N, and Xof-N representations). To reduce effects of other factors such as tree learning methods, new attribute search strategies, evaluation functions, and stopping criteria, a single tree learning algorithm is developed. With different option settings, it can construct four different types of new attribute, but all other factors are fixed. The study reveals that conjunctive and disjunctive representations have very similar performance in terms of prediction accuracy and theory complexity on a variety of concepts. Moreover, the study demonstrates that the stronger representation power of Mof-N than conjunction and disjunction and the stronger representation power of Xof-N than these three types of new attribute can be reflected in the performance of decision tree learning. ",
+ "neighbors": [
+ 892,
+ 917,
+ 1008,
+ 1057
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1010,
+ "label": 0,
+ "text": "Title: An Investigation of Marker-Passing Algorithms for Analogue Retrieval \nAbstract: If analogy and case-based reasoning systems are to scale up to very large case bases, it is important to analyze the various methods used for retrieving analogues to identify the features of the problem for which they are appropriate. This paper reports on one such analysis, a comparison of retrieval by marker passing or spreading activation in a semantic network with Knowledge-Directed Spreading Activation, a method developed to be well-suited for retrieving semantically distant analogues from a large knowledge base. The analysis has two complementary components: (1) a theoretical model of the retrieval time based on a number of problem characteristics, and (2) experiments showing how the retrieval time of the approaches varies with the knowledge base size. These two components, taken together, suggest that KDSA is more likely than SA to be able to scale up to retrieval in large knowledge bases.",
+ "neighbors": [
+ 1004,
+ 1005,
+ 1116
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1011,
+ "label": 2,
+ "text": "Title: DNA Sequence Classification Using Compression-Based Induction \nAbstract: DIMACS Technical Report 95-04 April 1995 ",
+ "neighbors": [
+ 1113
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1012,
+ "label": 2,
+ "text": "Title: A Model of Rapid Memory Formation in the Hippocampal System \nAbstract: Our ability to remember events and situations in our daily life demonstrates our ability to rapidly acquire new memories. There is a broad consensus that the hippocampal system (HS) plays a critical role in the formation and retrieval of such memories. A computational model is described that demonstrates how the HS may rapidly transform a transient pattern of activity representing an event or a situation into a persistent structural encoding via long-term potentiation and long-term depression. ",
+ "neighbors": [
+ 1183
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1013,
+ "label": 3,
+ "text": "Title: Convergence in Norm for Alternating Expectation-Maximization (EM) Type Algorithms 1 \nAbstract: We provide a sufficient condition for convergence of a general class of alternating estimation-maximization (EM) type continuous-parameter estimation algorithms with respect to a given norm. This class includes EM, penalized EM, Green's OSL-EM, and other approximate EM algorithms. The convergence analysis can be extended to include alternating coordinate-maximization EM algorithms such as Meng and Rubin's ECM and Fessler and Hero's SAGE. The condition for monotone convergence can be used to establish norms under which the distance between successive iterates and the limit point of the EM-type algorithm approaches zero monotonically. For illustration, we apply our results to estimation of Poisson rate parameters in emission tomography and establish that in the final iterations the logarithm of the EM iterates converge monotonically in a weighted Euclidean norm. ",
+ "neighbors": [
+ 1244
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1014,
+ "label": 1,
+ "text": "Title: Modeling Building-Block Interdependency Dynamical and Evolutionary Machine Organization Group \nAbstract: The Building-Block Hypothesis appeals to the notion of problem decomposition and the assembly of solutions from sub-solutions. Accordingly, there have been many varieties of GA test problems with a structure based on building-blocks. Many of these problems use deceptive fitness functions to model interdependency between the bits within a block. However, very few have any model of interdependency between building-blocks; those that do are not consistent in the type of interaction used intra-block and inter-block. This paper discusses the inadequacies of the various test problems in the literature and clarifies the concept of building-block interdependency. We formulate a principled model of hierarchical interdependency that can be applied through many levels in a consistent manner and introduce Hierarchical If-and-only-if (H-IFF) as a canonical example. We present some empirical results of GAs on H-IFF showing that if population diversity is maintained and linkage is tight then the GA is able to identify and manipulate building-blocks over many levels of assembly, as the Building-Block Hypothesis suggests. ",
+ "neighbors": [
+ 91,
+ 707,
+ 941
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1015,
+ "label": 2,
+ "text": "Title: Achieving Super Computer Performance with a DSP Array Processor \nAbstract: The MUSIC system (MUlti Signal processor system with Intelligent Communication) is a parallel distributed memory architecture based on digital signal processors (DSP). A system with 60 processor elements is operational. It has a peak performance of 3.8 GFlops, an electrical power consumption of less than 800 W (including forced air cooling) and fits into a 19\" rack. Two applications (the back-propagation algorithm for neural net learning and molecular dynamics simulations) run about 6 times faster than on a CRAY Y-MP and 2 times faster than on a NEC SX-3. A sustained performance of more than 1 GFlops is reached. The selling price of such a system would be in the range of about 300'000 US$. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1016,
+ "label": 2,
+ "text": "Title: DYNAMICAL BEHAVIOR OF ARTIFICIAL NEURAL NETWORKS WITH RANDOM WEIGHTS \nAbstract: In this paper we report a Monte Carlo study of the dynamics of large untrained, feedforward, neural networks with randomly chosen weights and feedback. The analysis consists of looking at the percent of the systems that exhibit chaos, the distrubution of largest Lyapunov exponents, and the distrubution of correlation dimensions. As the systems become more complex (increasing inputs and neurons), the probability of chaos approaches unity. The correlation dimension is typically much smaller than the system dimension. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1017,
+ "label": 0,
+ "text": "Title: Learning High Utility Rules by Incorporating Search Control Guidance Committee \nAbstract: In this paper we extend the basic autologistic model to include covariates and an indication of sampling effort. The model is applied to sampled data instead of the traditional use for image analysis where complete data are available. We adopt a Bayesian set-up and develop a hybrid Gibbs sampling estimation procedure. Using simulated examples, we show that the autologistic model with covariates for sample data improves predictions as compared to the simple logistic regression model and the standard autologistic model (without covariates). ",
+ "neighbors": [
+ 144,
+ 416
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1018,
+ "label": 2,
+ "text": "Title: Rochester Connectionist Simulator \nAbstract: Specifying, constructing and simulating structured connectionist networks requires significant programming effort. System tools can greatly reduce the effort required, and by providing a conceptual structure within which to work, make large and complex network simulations possible. The Rochester Connectionist Simulator is a system tool designed to aid specification, construction and simulation of connectionist networks. This report describes this tool in detail: the facilities provided and how to use them, as well as details of the implementation. Through this we hope not only to make designing and verifying connectionist networks easier, but also to encourage the development and refinement of connectionist research tools themselves. ",
+ "neighbors": [
+ 443
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1019,
+ "label": 5,
+ "text": "Title: Integrity Constraints in ILP using a Monte Carlo approach \nAbstract: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely indu ctive program synthesis where relativelly small and sparse example sets are a more realistic scenario. Integrity constraints are first order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy the orem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by ra ndom generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples. ",
+ "neighbors": [
+ 198
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1020,
+ "label": 0,
+ "text": "Title: Symposium Title: Tutorial Discourse What Makes Human Explanations Effective? \nAbstract: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely indu ctive program synthesis where relativelly small and sparse example sets are a more realistic scenario. Integrity constraints are first order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy the orem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by ra ndom generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples. ",
+ "neighbors": [
+ 1068
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1021,
+ "label": 2,
+ "text": "Title: LU TP 91-4 Self-organizing Networks for Extracting Jet Features \nAbstract: Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organization networks is in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics; hadronization model discrimination and separation of b,c and light quarks. ",
+ "neighbors": [
+ 430
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1022,
+ "label": 2,
+ "text": "Title: LU TP 93-13 On Langevin Updating in Multilayer Perceptrons \nAbstract: The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, that often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1023,
+ "label": 6,
+ "text": "Title: Approximating Hyper-Rectangles: Learning and Pseudo-random Sets \nAbstract: The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subprob-lem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of n independent multivalued random variables. We present improved upper bounds for a class of such problems of approximating high-dimensional rectangles that arise in PAC learning and pseudorandomness. ",
+ "neighbors": [
+ 62
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1024,
+ "label": 2,
+ "text": "Title: The Functional Transfer of Knowledge for Coronary Artery Disease Diagnosis \nAbstract: A distinction between two forms of task knowledge transfer, representational and functional, is reviewed followed by a discussion of MTL, a modified version of the multiple task learning (MTL) neural network method of functional transfer. The MTL method employs a separate learning rate, k , for each task output node k. k varies as a function of a measure of relatedness, R k , between the kth task and the primary task of interest. An MTL network is applied to a diagnostic domain of four levels of coronary artery disease. Results of experiments demonstrate the ability of MTL to develop a predictive model for one level of disease which has superior diagnostic ability over models produced by either single task learning or standard multiple task learning. ",
+ "neighbors": [
+ 324,
+ 421,
+ 1306
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1025,
+ "label": 2,
+ "text": "Title: Vapnik-Chervonenkis Dimension of Recurrent Neural Networks \nAbstract: Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k w. Ignoring multiplicative constants, the main results say roughly the following: * For architectures with activation = any fixed nonlinear polynomial, the VC dimension is wk. * For architectures with activation = any fixed piecewise polynomial, the VC dimension is between wk and w 2 k. * For architectures with activation = H (threshold nets), the VC dimension is between w log(k=w) and minfwk log wk; w 2 + w log wkg. * For the standard sigmoid (x) = 1=(1 + e x ), the VC dimension is between wk and w 4 k 2 . An earlier version of this paper has appeared in Proc. 3rd European Workshop on Computational Learning Theory, LNCS 1208, pages 223-237, Springer, 1997. ",
+ "neighbors": [
+ 31,
+ 112,
+ 116,
+ 233,
+ 650,
+ 973
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1026,
+ "label": 2,
+ "text": "Title: Learning NonLinearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks \nAbstract: This paper investigates an algorithm for the construction of decisions trees comprised of linear threshold units and also presents a novel algorithm for the learning of non-linearly separable boolean functions using Madaline-style networks which are isomorphic to decision trees. The construction of such networks is discussed, and their performance in learning is compared with standard BackPropagation on a sample problem in which many irrelevant attributes are introduced. Littlestone's Winnow algorithm is also explored within this architecture as a means of learning in the presence of many irrelevant attributes. The learning ability of this Madaline-style architecture on nonoptimal (larger than necessary) networks is also explored. ",
+ "neighbors": [
+ 58,
+ 1028
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1027,
+ "label": 3,
+ "text": "Title: Causal inference, path analysis, and recursive struc-tural equations models. In C. Clogg, editor, Sociological Methodology,\nAbstract: Lipid Research Clinic Program 84] Lipid Research Clinic Program. The Lipid Research Clinics Coronary Primary Prevention Trial results, parts I and II. Journal of the American Medical Association, 251(3):351-374, January 1984. [Pearl 93] Judea Pearl. Aspects of graphical models connected with causality. Technical Report R-195-LL, Cognitive Systems Laboratory, UCLA, June 1993. Submitted to Biometrika (June 1993). Short version in Proceedings of the 49th Session of the International Statistical Institute: Invited papers, Flo rence, Italy, August 1993, Tome LV, Book 1, pp. 391-401. ",
+ "neighbors": [
+ 528,
+ 850,
+ 1125
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1028,
+ "label": 2,
+ "text": "Title: Generating Neural Networks Through the Induction of Threshold Logic Unit Trees (Extended Abstract) \nAbstract: We investigate the generation of neural networks through the induction of binary trees of threshold logic units (TLUs). Initially, we describe the framework for our tree construction algorithm and how such trees can be transformed into an isomorphic neural network topology. Several methods for learning the linear discriminant functions at each node of the tree structure are examined and shown to produce accuracy results that are comparable to classical information theoretic methods for constructing decision trees (which use single feature tests at each node). Our TLU trees, however, are smaller and thus easier to understand. Moreover, we show that it is possible to simultaneously learn both the topology and weight settings of a neural network simply using the training data set that we are given. ",
+ "neighbors": [
+ 58,
+ 374,
+ 1026
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1029,
+ "label": 2,
+ "text": "Title: Experiments with the Cascade-Correlation Algorithm \nAbstract: Technical Report # 91-16 July 1991; Revised August 1991 ",
+ "neighbors": [
+ 283,
+ 1235
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1030,
+ "label": 3,
+ "text": "Title: Learning Convex Sets of Probability from Data \nAbstract: This reproduces a report submitted to Rome Laboratory on October 27, 1994. c flCopyright 1994 by Jon Doyle. All rights reserved. Freely available via http://www.medg.lcs.mit.edu/doyle. Final Report on Rational Distributed Reason Maintenance for Abstract Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the sub-plans improve on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance|the standard method for belief revision|choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. We develop revision methods aiming to revise only those beliefs and plans worth revising, and to tolerate incoherence and ungroundedness when these are judged less detrimental than a costly revision effort. We use an artificial market economy in planning and revision tasks to arrive at overall judgments of worth, and present a representation for qualitative preferences that permits capture of common forms of dominance information. ",
+ "neighbors": [
+ 987
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1031,
+ "label": 3,
+ "text": "Title: Toward Rational Planning and Replanning Rational Reason Maintenance, Reasoning Economies, and Qualitative Preferences formal notions\nAbstract: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the subplans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance|the standard method for belief revision|choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. To address these problems, we developed (1) revision methods aimed at revising only those beliefs and plans worth revising, and tolerating incoherence and ungroundedness when these are judged less detrimental than a costly revision effort, (2) an artificial market economy in planning and revision tasks for arriving at overall judgments of worth, and (3) a representation for qualitative preferences that permits capture of common forms of dominance information. We view the activities of intelligent agents as stemming from interleaved or simultaneous planning, replanning, execution, and observation subactivities. In this model of the plan construction process, the agents continually evaluate and revise their plans in light of what happens in the world. Planning is necessary for the organization of large-scale activities because decisions about actions to be taken in the future have direct impact on what should be done in the shorter term. But even if well-constructed, the value of a plan decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When changes occur before or during execution of the plan, it may be necessary to construct a new plan by starting from scratch or by revising a previous plan. only the portions of the plan actually affected by the changes. Given the information accrued during plan execution, which remaining parts of the original plan should be salvaged and in what ways should other parts be changed? Incremental replanning first involves localizing the potential changes or conflicts by identifying the subset of the extant beliefs and plans in which they occur. It then involves choosing which of the identified beliefs and plans to keep and which to change. For greatest efficiency, the choices of what portion of the plan to revise and how to revise it should be based on coherent expectations about and preferences among the consequences of different alternatives so as to be rational in the sense of decision theory (Savage 1972). Our work toward mechanizing rational planning and replanning has focussed on four main issues: This paper focusses on the latter three issues; for our approach to the first, see (Doyle 1988; 1992). Replanning in an incremental and local manner requires that the planning procedures routinely identify the assumptions made during planning and connect plan elements with these assumptions, so that replan-ning may seek to change only those portions of a plan dependent upon assumptions brought into question by new information. Consequently, the problem of revising plans to account for changed conditions has much ",
+ "neighbors": [
+ 987,
+ 1069
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1032,
+ "label": 3,
+ "text": "Title: A Comparison of Induction Algorithms for Selective and non-Selective Bayesian Classifiers \nAbstract: In this paper we present a novel induction algorithm for Bayesian networks. This selective Bayesian network classifier selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby learning Bayesian networks with a bias for small, high-predictive-accuracy networks. We compare the performance of this classifier with selective and non-selective naive Bayesian classifiers. We show that the selective Bayesian network classifier performs significantly better than both versions of the naive Bayesian classifier on almost all databases analyzed, and hence is an enhancement of the naive Bayesian classifier. Relative to the non-selective Bayesian network classifier, our selective Bayesian network classifier generates networks that are computationally simpler to evaluate and that display predictive accuracy comparable to that of Bayesian networks which model all features.",
+ "neighbors": [
+ 369,
+ 863,
+ 884,
+ 1342
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1033,
+ "label": 3,
+ "text": "Title: Minimax Estimation via Wavelet Shrinkage a pleasure to acknowledge friendly conversations with Gerard Kerkyacharian, \nAbstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, : : : ) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems. Acknowledgements. This work was completed while the first author was on leave from U.C. Berkeley, where his research was supported by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from ATT Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12. Supersedes an earlier version, titled \"Wavelets and Optimal Function Estimation\", dated November 10, 1990, and issued as Technical reports by the Departments of Statistics at both Stanford and at U.C. Berkeley. ",
+ "neighbors": [
+ 408,
+ 929,
+ 1103,
+ 1133,
+ 1301
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1034,
+ "label": 1,
+ "text": "Title: Genetic Programming and Data Structures \nAbstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, : : : ) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems. Acknowledgements. This work was completed while the first author was on leave from U.C. Berkeley, where his research was supported by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from ATT Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12. Supersedes an earlier version, titled \"Wavelets and Optimal Function Estimation\", dated November 10, 1990, and issued as Technical reports by the Departments of Statistics at both Stanford and at U.C. Berkeley. ",
+ "neighbors": [
+ 168,
+ 501,
+ 622,
+ 952,
+ 1105,
+ 1157
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1035,
+ "label": 2,
+ "text": "Title: Theory of Correlations in Stochastic Neural Networks \nAbstract: One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus, a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop the theory of neuronal correlation functions in large networks comprising of several highly connected subpopulations, and obeying stochastic dynamic rules. When the networks are in asynchronous states, the cross-correlations are relatively weak, i.e., their amplitude relative to that of the auto-correlations is of order of 1=N , N being the size of the interacting populations. Using the weakness of the cross-correlations, general equations which express the matrix of cross-correlations in terms of the mean neuronal activities, and the effective interaction matrix are presented. The effective interactions are the synaptic efficacies multiplied by the the gain of the postsynaptic neurons. The time-delayed cross-correlation matrix can be expressed as a sum of exponentially decaying modes that correspond to the (non-orthogonal) eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for the comparison between the contribution from the internal common input and that from the direct ",
+ "neighbors": [
+ 176
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1036,
+ "label": 6,
+ "text": "Title: Stochastic Logic Programs \nAbstract: One way to represent a machine learning algorithm's bias over the hypothesis and instance space is as a pair of probability distributions. This approach has been taken both within Bayesian learning schemes and the framework of U-learnability. However, it is not obvious how an Inductive Logic Programming (ILP) system should best be provided with a probability distribution. This paper extends the results of a previous paper by the author which introduced stochastic logic programs as a means of providing a structured definition of such a probability distribution. Stochastic logic programs are a generalisation of stochastic grammars. A stochastic logic program consists of a set of labelled clauses p : C where p is from the interval [0; 1] and C is a range-restricted definite clause. A stochastic logic program P has a distributional semantics, that is one which assigns a probability distribution to the atoms of each predicate in the Herbrand base of the clauses in P . These probabilities are assigned to atoms according to an SLD-resolution strategy which employs a stochastic selection rule. It is shown that the probabilities can be computed directly for fail-free logic programs and by normalisation for arbitrary logic programs. The stochastic proof strategy can be used to provide three distinct functions: 1) a method of sampling from the Herbrand base which can be used to provide selected targets or example sets for ILP experiments, 2) a measure of the information content of examples or hypotheses; this can be used to guide the search in an ILP system and 3) a simple method for conditioning a given stochastic logic program on samples of data. Functions 1) and 3) are used to measure the generality of hypotheses in the ILP system Progol4.2. This supports an implementation of a Bayesian technique for learning from positive examples only. fl This paper is an extension of a paper with the same title which appeared in [12] ",
+ "neighbors": [
+ 724,
+ 1204
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1037,
+ "label": 5,
+ "text": "Title: Inductive Constraint Logic and the Mutagenesis Problem \nAbstract: A novel approach to learning first order logic formulae from positive and negative examples is incorporated in a system named ICL (Inductive Constraint Logic). In ICL, examples are viewed as interpretations which are true or false for the target theory, whereas in present inductive logic programming systems, examples are true and false ground facts (or clauses). Furthermore, ICL uses a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form. We present some experiments with this new system on the mutagenesis problem. These experiments illustrate some of the differences with other systems, and indicate that our approach should work at least as well as the more classical approaches.",
+ "neighbors": [
+ 374,
+ 573,
+ 1247,
+ 1251
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1038,
+ "label": 1,
+ "text": "Title: Computation. Automated Synthesis of Analog Electrical Circuits by Means of Genetic Programming \nAbstract: The design (synthesis) of analog electrical circuits starts with a high-level statement of the circuit's desired behavior and requires creating a circuit that satisfies the specified design goals. Analog circuit synthesis entails the creation of both the topology and the sizing (numerical values) of all of the circuit's components. The difficulty of the problem of analog circuit synthesis is well known and there is no previously known general automated technique for synthesizing an analog circuit from a high-level statement of the circuit's desired behavior. This paper presents a single uniform approach using genetic programming for the automatic synthesis of both the topology and sizing of a suite of eight different prototypical analog circuits, including a lowpass filter, a crossover (woofer and tweeter) filter, a source identification circuit, an amplifier, a computational circuit, a time-optimal controller circuit, a temperaturesensing circuit, and a voltage reference circuit. The problemspecific information required for each of the eight problems is minimal and consists primarily of the number of inputs and outputs of the desired circuit, the types of available components, and a fitness measure that restates the high-level statement of the circuit's desired behavior as a measurable mathematical quantity. The eight genetically evolved circuits constitute an instance of an evolutionary computation technique producing results on a task that is usually thought of as requiring human intelligence. The fact that a single uniform approach yielded a satisfactory design for each of the eight circuits as well as the fact that a satisfactory design was created on the first or second run of each problem are evidence for the general applicability of genetic programming for solving the problem of automatic synthesis of analog electrical circuits. ",
+ "neighbors": [
+ 300,
+ 1043
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1039,
+ "label": 2,
+ "text": "Title: EM Algorithms for PCA and SPCA \nAbstract: I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PCA called sensible principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I report results on synthetic and real data showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to hundreds of thousands of datapoints in thousands of dimensions.",
+ "neighbors": [
+ 1041,
+ 1166
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1040,
+ "label": 2,
+ "text": "Title: A Neural Architecture for Content as well as Address-Based Storage and Recall: Theory and Applications \nAbstract: Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1041,
+ "label": 2,
+ "text": "Title: Mixtures of Probabilistic Principal Component Analysers \nAbstract: Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. ",
+ "neighbors": [
+ 40,
+ 387,
+ 1039,
+ 1118,
+ 1298
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1042,
+ "label": 2,
+ "text": "Title: Geometry of Early Stopping in Linear Networks \nAbstract: ",
+ "neighbors": [
+ 1213
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1043,
+ "label": 1,
+ "text": "Title: AUTOMATED TOPOLOGY AND SIZING OF ANALOG CIRCUITS AUTOMATED DESIGN OF BOTH THE TOPOLOGY AND SIZING\nAbstract: This paper describes an automated process for designing analog electrical circuits based on the principles of natural selection, sexual recombination, and developmental biology. The design process starts with the random creation of a large population of program trees composed of circuit-constructing functions. Each program tree specifies the steps by which a fully developed circuit is to be progressively developed from a common embryonic circuit appropriate for the type of circuit that the user wishes to design. Each fully developed circuit is translated into a netlist, simulated using a modified version of SPICE, and evaluated as to how well it satisfies the user's design requirements. The fitness measure is a user-written computer program that may incorporate any calculable characteristic or combination of characteristics of the circuit, including the circuit's behavior in the time domain, its behavior in the frequency domain, its power consumption, the number of components, cost of components, or surface area occupied by its components. The population of program trees is genetically bred over a series of many generations using genetic programming. Genetic programming is driven by a fitness measure and employs genetic operations such as Darwinian reproduction, sexual recombination (crossover), and occasional mutation to create offspring. This automated evolutionary process produces both the topology of the circuit and the numerical values for each component. This paper describes how genetic programming can evolve the circuit for a difficult-to-design low-pass filter. ",
+ "neighbors": [
+ 300,
+ 1038,
+ 1185,
+ 1325
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1044,
+ "label": 3,
+ "text": "Title: Continuous sigmoidal belief networks trained using slice sampling \nAbstract: Real-valued random hidden variables can be useful for modelling latent structure that explains correlations among observed variables. I propose a simple unit that adds zero-mean Gaussian noise to its input before passing it through a sigmoidal squashing function. Such units can produce a variety of useful behaviors, ranging from deterministic to binary stochastic to continuous stochastic. I show how \"slice sampling\" can be used for inference and learning in top-down networks of these units and demonstrate learning on two simple problems. ",
+ "neighbors": [
+ 19,
+ 433,
+ 1335
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1045,
+ "label": 3,
+ "text": "Title: Sequential Update of Bayesian Network Structure \nAbstract: There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a fixed structure can be accomplished using standard techniques, sequential update of network structure is still an open problem. In this paper, we investigate sequential update of Bayesian networks were both parameters and structure are expected to change. We introduce a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations. We formally describe our approach including the necessary modifications to the scoring functions for learning Bayesian networks, evaluate its effectiveness through and empirical study, and extend it to the case of missing data.",
+ "neighbors": [
+ 42,
+ 238,
+ 321,
+ 994
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1046,
+ "label": 3,
+ "text": "Title: Using Qualitative Relationships for Bounding Probability Distributions \nAbstract: We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. ",
+ "neighbors": [
+ 60,
+ 223,
+ 364,
+ 606,
+ 1192
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1047,
+ "label": 1,
+ "text": "Title: A Comparison of Crossover and Mutation in Genetic Programming \nAbstract: This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of building blocks in GP.",
+ "neighbors": [
+ 501,
+ 1145,
+ 1161,
+ 1174
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1048,
+ "label": 3,
+ "text": "Title: Defining Relative Likelihood in Partially-Ordered Preferential Structures \nAbstract: Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning.",
+ "neighbors": [
+ 196
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1049,
+ "label": 2,
+ "text": "Title: Rapid Quality Estimation of Neural Network Input Representations \nAbstract: The choice of an input representation for a neural network can have a profound impact on its accuracy in classifying novel instances. However, neural networks are typically computationally expensive to train, making it difficult to test large numbers of alternative representations. This paper introduces fast quality measures for neural network representations, allowing one to quickly and accurately estimate which of a collection of possible representations for a problem is the best. We show that our measures for ranking representations are more accurate than a previously published measure, based on experiments with three difficult, real-world pattern recognition problems.",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1050,
+ "label": 2,
+ "text": "Title: Rapid Quality Estimation of Neural Network Input Representations \nAbstract: FURTHER RESULTS ON CONTROLLABILITY PROPERTIES OF DISCRETE-TIME NONLINEAR SYSTEMS fl ABSTRACT Controllability questions for discrete-time nonlinear systems are addressed in this paper. In particular, we continue the search for conditions under which the group-like notion of transitivity implies the stronger and semigroup-like property of forward accessibility. We show that this implication holds, pointwise, for states which have a weak Poisson stability property, and globally, if there exists a global \"attractor\" for the system. ",
+ "neighbors": [
+ 963
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1051,
+ "label": 2,
+ "text": "Title: Analysis of Decision Boundaries Generated by Constructive Neural Network Learning Algorithms \nAbstract: Constructive learning algorithms offer an approach to incremental construction of near-minimal artificial neural networks for pattern classification. Examples of such algorithms include Tower, Pyramid, Upstart, and Tiling algorithms which construct multilayer networks of threshold logic units (or, multilayer perceptrons). These algorithms differ in terms of the topology of the networks that they construct which in turn biases the search for a decision boundary that correctly classifies the training set. This paper presents an analysis of such algorithms from a geometrical perspective. This analysis helps in a better characterization of the search bias employed by the different algorithms in relation to the geometrical distribution of examples in the training set. Simple experiments with non linearly separable training sets support the results of mathematical analysis of such algorithms. This suggests the possibility of designing more efficient constructive algorithms that dynamically choose among different biases to build near-minimal networks for pattern classification. ",
+ "neighbors": [
+ 288,
+ 1083,
+ 1235,
+ 1236
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1052,
+ "label": 2,
+ "text": "Title: that fits the asymptotics of the problem. References \nAbstract: 1] D. Aldous and P. Shields. A diffusion limit for a class of randomly growing binary trees. Probability Theory, 79:509-542, 1988. [2] R. Breathnach, C. Benoist, K. O'Hare, F. Gannon, and P. Chambon. Ovalbumin gene: Evidence for leader sequence in mRNA and DNA sequences at the exon-intron boundaries. Proceedings of the National Academy of Science, 75:4853-4857, 1978. [3] S. Brunak, J. Engelbrecht, and S. Knudsen. Prediction of human mRNA donor and acceptor sites from the DNA sequence. Journal of Molecular Biology, 220:49, 1991. [4] Jack Cophen and Ian Stewart. The information in your hand. The Mathematical Intelligencer, 13(3), 1991. [5] R. G. Gallager. Information Theory and Reliable Communication. John Wiley & Sons, Inc., 1968. [6] Ali Hariri, Bruce Weber, and John Olmstead. On the validity of Shannon-information calculations for molecular biological sequence. Journal of Theoretical Biology, 147:235-254, 1990. [7] W. B. Davenport Jr. and W. L. Root. An Introduction to the Theory of Random Signals and Noise. McGraw-Hill, 1958. [8] Andrzej Knopka and John Owens. Complexity charts can be used to map functional domains in DNA. Gene Anal. Techn., 6, 1989. [9] S.M. Mount. A catalogue of splice-junction sequences. Nucleic Acids Research, 10:459-472, 1982. [10] H.M. Seidel, D.L. Pompliano, and J.R. Knowles. Exons as microgenes? Science, 257, September 1992. [11] C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., 27:379-423, 623-656, 1948. [12] Peter S. Shenkin, Batu Erman, and Lucy D. Mastrandrea. Information-theoretical entropy as a measure of sequence variability. Proteins, 11(4):297, 1991. [13] R. Staden. Measurements of the effects that coding for a protein has on a DNA sequence and their use for finding genes. Nucleic Acids Research, 12:551-567, 1984. [14] J.A. Steitz. Snurps. Scientific American, 258(6), June 1988. [15] H. van Trees. Detection, estimation and modulation theory. Wiley, 1971. [16] J. D. Watson, N. H. Hopkins, J. W. Roberts, J. Ar-getsinger Steitz, and A. M. Weiner. Molecular Biology of the Gene. Benjamin/Cummings, Menlo Park, CA, fourth edition, 1987. [17] A.D. Wyner and A.J. Wyner. An improved version of the Lempel-Ziv algorithm. Transactions of Information Theory. [18] A.J. Wyner. String Matching Theorems and Applications to Data Compression and Statistics. PhD thesis, Stanford University, 1993. [19] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, IT-23(3):337-343, 1977. ",
+ "neighbors": [
+ 1113
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1053,
+ "label": 4,
+ "text": "Title: TD Models: Modeling the World at a Mixture of Time Scales \nAbstract: Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem|that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985).",
+ "neighbors": [
+ 54,
+ 187,
+ 1128,
+ 1163
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1054,
+ "label": 5,
+ "text": "Title: Abstract \nAbstract: This paper is a scientific comparison of two code generation techniques with identical goals generation of the best possible software pipelined code for computers with instruction level parallelism. Both are variants of modulo scheduling, a framework for generation of software pipelines pioneered by Rau and Glaser [RaGl81], but are otherwise quite dissimilar. One technique was developed at Silicon Graphics and is used in the MIPSpro compiler. This is the production compiler for SGI s systems which are based on the MIPS R8000 processor [Hsu94]. It is essentially a branchandbound enumeration of possible schedules with extensive pruning. This method is heuristic because of the way it prunes and also because of the interaction between register allocation and scheduling. 1 The second technique aims to produce optimal results by formulating the scheduling and register allocation problem as an integrated integer linear programming (ILP 1 ) problem. This idea has received much recent exposure in the literature [AlGoGa95, Feautrier94, GoAlGa94a, GoAlGa94b, Eichenberger95], but to our knowledge all previous implementations have been too preliminary for detailed measurement and evaluation. In particular, we believe this to be the first published measurement of runtime performance for ILP based generation of software pipelines. A particularly valuable result of this study was evaluation of the heuristic pipelining technology in the SGI compiler . One of the motivations behind the McGill research was the hope that optimal software pipelining, while not in itself practical for use in production compilers, would be useful for their evaluation and validation. Our comparison has indeed provided a quantitative validation of the SGI compilers pipeliner, leading us to increased confidence in both techniques. ",
+ "neighbors": [
+ 1127
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1055,
+ "label": 2,
+ "text": "Title: Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION-PROXIMAL POINT ALGORITHM \nAbstract: We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. Additionally, presented analysis yields an alternative proof of convergence for the exact proximal point method, which allows a nice geometric interpretation, and is somewhat more intuitive than the classical proof. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1056,
+ "label": 6,
+ "text": "Title: Learning Distributions from Random Walks \nAbstract: We introduce a new model of distributions generated by random walks on graphs. This model suggests a variety of learning problems, using the definitions and models of distribution learning defined in [6]. Our framework is general enough to model previously studied distribution learning problems, as well as to suggest new applications. We describe special cases of the general problem, and investigate their relative difficulty. We present algorithms to solve the learning problem under various conditions.",
+ "neighbors": [
+ 333,
+ 998,
+ 1281
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1057,
+ "label": 6,
+ "text": "Title: Constructing Nominal Xof-N Attributes \nAbstract: Most constructive induction researchers focus only on new boolean attributes. This paper reports a new constructive induction algorithm, called XofN, that constructs new nominal attributes in the form of Xof-N representations. An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. The promising preliminary experimental results, on both artificial and real-world domains, show that constructing new nominal attributes in the form of Xof-N representations can significantly improve the performance of selective induction in terms of both higher prediction accuracy and lower theory complexity.",
+ "neighbors": [
+ 58,
+ 892,
+ 917,
+ 1008,
+ 1009,
+ 1340
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1058,
+ "label": 2,
+ "text": "Title: A Delay-Line Based Motion Detection Chip \nAbstract: Inspired by a visual motion detection model for the rabbit retina and by a computational architecture used for early audition in the barn owl, we have designed a chip that employs a correlation model to report the one-dimensional field motion of a scene in real time. Using subthreshold analog VLSI techniques, we have fabricated and successfully tested a 8000 transistor chip using a standard MOSIS process.",
+ "neighbors": [
+ 303,
+ 973,
+ 1322
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1059,
+ "label": 4,
+ "text": "Title: Generalization and scaling in reinforcement learning \nAbstract: In associative reinforcement learning, an environment generates input vectors, a learning system generates possible output vectors, and a reinforcement function computes feedback signals from the input-output pairs. The task is to discover and remember input-output pairs that generate rewards. Especially difficult cases occur when rewards are rare, since the expected time for any algorithm can grow exponentially with the size of the problem. Nonetheless, if a reinforcement function possesses regularities, and a learning algorithm exploits them, learning time can be reduced below that of non-generalizing algorithms. This paper describes a neural network algorithm called complementary reinforcement back-propagation (CRBP), and reports simulation results on problems designed to offer differing opportunities for generalization.",
+ "neighbors": [
+ 1092,
+ 1153,
+ 1222
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1060,
+ "label": 1,
+ "text": "Title: Voting for Schemata \nAbstract: The schema theorem states that implicit parallel search is behind the power of the genetic algorithm. We contend that chromosomes can vote, proportionate to their fitness, for candidate schemata. We maintain a population of binary strings and ternary schemata. The string population not only works on solving its problem domain, but it supplies fitness for the schema population, which indirectly can solve the original problem.",
+ "neighbors": [
+ 91,
+ 568,
+ 941
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1061,
+ "label": 2,
+ "text": "Title: Data Mining for Association Rules with Unsupervised Neural Networks \nAbstract: results for Gaussian mixture models and factor analysis are discussed. ",
+ "neighbors": [
+ 19,
+ 387,
+ 1166
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1062,
+ "label": 4,
+ "text": "Title: Associative Reinforcement Learning: Functions in k-DNF \nAbstract: An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing algorithms that can efficiently learn action maps that are expressible in k-DNF. The algorithms are compared with existing methods in empirical trials and are shown to have very good performance. ",
+ "neighbors": [
+ 240,
+ 327
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1063,
+ "label": 3,
+ "text": "Title: A note on acceptance rate criteria for CLTs for Hastings-Metropolis algorithms \nAbstract: This note considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis-Hastings algorithms which are constructed in terms of a rejection probability, (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis adjusted Langevin algorithm. The examples are rather specialised, although in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used. 0 I would like to thank Kerrie Mengersen Jeff Rosenthal and Richard Tweedie for useful conversations on the subject of this paper. ",
+ "neighbors": [
+ 1130,
+ 1200,
+ 1282
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1064,
+ "label": 1,
+ "text": "Title: An Overview of Evolutionary Computation \nAbstract: Evolutionary computation uses computational models of evolution - ary processes as key elements in the design and implementation of computer-based problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and di fferences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
+ "neighbors": [
+ 440,
+ 1154,
+ 1258
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1065,
+ "label": 2,
+ "text": "Title: Task and Spatial Frequency Effects on Face Specialization \nAbstract: There is strong evidence that face processing is localized in the brain. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent mechanisms in the brain. Is neural specialization innate or learned? We suggest that this specialization could be the result of a competitive learning mechanism that, during development, devotes neural resources to the tasks they are best at performing. Further, we suggest that the specialization arises as an interaction between task requirements and developmental constraints. In this paper, we present a feed-forward computational model of visual processing, in which two modules compete to classify input stimuli. When one module receives low spatial frequency information and the other receives high spatial frequency information, and the task is to identify the faces while simply classifying the objects, the low frequency network shows a strong specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that an innately-specified face processing module is unnecessary.",
+ "neighbors": [
+ 1271
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1066,
+ "label": 3,
+ "text": "Title: Theoretical rates of convergence for Markov chain Monte Carlo \nAbstract: We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally. ",
+ "neighbors": [
+ 21,
+ 947,
+ 949,
+ 1130
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1067,
+ "label": 6,
+ "text": "Title: BOOSTING AND NAIVE BAYESIAN LEARNING \nAbstract: Although so-called naive Bayesian classification makes the unrealistic assumption that the values of the attributes of an example are independent given the class of the example, this learning method is remarkably successful in practice, and no uniformly better learning method is known. Boosting is a general method of combining multiple classifiers due to Yoav Freund and Rob Schapire. This paper shows that boosting applied to naive Bayesian classifiers yields combination classifiers that are representationally equivalent to standard feedforward multilayer perceptrons. (An ancillary result is that naive Bayesian classification is a nonparametric, nonlinear generalization of logistic regression.) As a training algorithm, boosted naive Bayesian learning is quite different from backpropagation, and has definite advantages. Boosting requires only linear time and constant space, and hidden nodes are learned incrementally, starting with the most important. On the real-world datasets on which the method has been tried so far, generalization performance is as good as or better than the best published result using any other learning method. Unlike all other standard learning algorithms, naive Bayesian learning, with and without boosting, can be done in logarithmic time with a linear number of parallel computing units. Accordingly, these learning methods are highly plausible computationally as models of animal learning. Other arguments suggest that they are plausible behaviorally also. ",
+ "neighbors": [
+ 39,
+ 330,
+ 744,
+ 1208,
+ 1262
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1068,
+ "label": 0,
+ "text": "Title: Participating in Instructional Dialogues: Finding and Exploiting Relevant Prior Explanations \nAbstract: In this paper we present our research on identifying and modeling the strategies that human tutors use for integrating previous explanations into current explanations. We have used this work to develop a computational model that has been partially implemented in an explanation facility for an existing tutoring system known as SHERLOCK. We are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Thus, strategies for using the dialogue history in generating explanations are of great importance to research in natural language generation for tutorial applications. The goal of our work is to produce a computational model of the effects of discourse context on explanations in instructional dialogues, and to implement this model in an intelligent tutoring system that maintains a dialogue history and uses it in planning its explanations. Based on a study of human-human instructional dialogues, we have developed a taxonomy that classifies the types of contextual effects that occur in our data according to the explanatory functions they serve (16). In this paper, we focus on one important category from our taxonomy: situations in which the tutor explicitly refers to a previous explanation in order to point out similarities (differences) between the material currently being explained and material presented in earlier explanation(s). We are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in instructional dialogues produced by human tutors. By building a computer system that has this capability as an optional facility that can be enabled or disabled, we will be able to systematically evaluate our hypothesis that this is a useful tutoring strategy. In order to test our hypotheses about the effects of previous discourse on explanations, we are building an explanation component for an existing intelligent training system, Sherlock (11). Sherlock is an intelligent coached practice environment for training avionics technicians to troubleshoot complex electronic equipment. Using Sherlock, trainees solve problems with minimal tutor interaction and then review their troubleshooting behavior in a post-problem reflective follow-up session (rfu) where the tutor instructional dialogues produced by human tutors.",
+ "neighbors": [
+ 1020
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1069,
+ "label": 3,
+ "text": "Title: Rationality and its Roles in Reasoning \nAbstract: The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.",
+ "neighbors": [
+ 987,
+ 1031
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1070,
+ "label": 2,
+ "text": "Title: A New Algorithm for DNA Sequence Assembly Running Title: A New Algorithm for DNA Sequence Assembly \nAbstract: The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.",
+ "neighbors": [
+ 1071
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1071,
+ "label": 2,
+ "text": "Title: AMASS: A Structured Pattern Matching Approach to Shotgun Sequence Assembly \nAbstract: In this paper, we propose an efficient, reliable shotgun sequence assembly algorithm based on a fingerprinting scheme that is robust to both noise and repetitive sequences in the data. Our algorithm uses exact matches of short patterns randomly selected from fragment data to identify fragment overlaps, construct an overlap map, and finally deliver a consensus sequence. We show how statistical clues made explicit in our approach can easily be exploited to correctly assemble results even in the presence of extensive repetitive sequences. Our approach is exceptionally fast in practice: e.g., we have successfully assembled a whole Mycoplasma genitalium genome (approximately 580 kbps) in roughly 8 minutes of 64MB 200MHz Pentium Pro CPU time from real shotgun data, where most existing algorithms can be expected to run for several hours to a day on the same data. Moreover, experiments with shotgun data synthetically prepared from real DNA sequences from a wide range of organisms (including human DNA) and containing extensive repeating regions demonstrate our algorithm's robustness to noise and the presence of repetitive sequences. For example, we have correctly assembled a 238kbp Human DNA sequence in less than 3 minutes of 64MB 200MHz Pentium Pro CPU time. fl Support for this research was provided in part by the Office of Naval Research through grant N0014-94-1-1178.",
+ "neighbors": [
+ 1070
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1072,
+ "label": 3,
+ "text": "Title: A LOGICAL APPROACH TO REASONING ABOUT UNCERTAINTY: A TUTORIAL \nAbstract: fl This paper will appear in Discourse, Interaction, and Communication, X. Arrazola, K. Korta, and F. J. Pelletier, eds., Kluwer, 1997. Much of this work was performed while the author was at IBM Almaden Research Center. IBM's support is gratefully acknowledged. ",
+ "neighbors": [
+ 265,
+ 1115
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1073,
+ "label": 6,
+ "text": "Title: On Learning Bounded-Width Branching Programs \nAbstract: In this paper, we study PAC-learning algorithms for specialized classes of deterministic finite automata (DFA). In particular, we study branching programs, and we investigate the influence of the width of the branching program on the difficulty of the learning problem. We first present a distribution-free algorithm for learning width-2 branching programs. We also give an algorithm for the proper learning of width-2 branching programs under uniform distribution on labeled samples. We then show that the existence of an efficient algorithm for learning width-3 branching programs would imply the existence of an efficient algorithm for learning DNF, which is not known to be the case. Finally, we show that the existence of an algorithm for learning width-3 branching programs would also yield an algorithm for learning a very restricted version of parity with noise.",
+ "neighbors": [
+ 392,
+ 1220
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1074,
+ "label": 4,
+ "text": "Title: A Computer Scientist's View of Life, the Universe, and Everything \nAbstract: Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe. Assumptions. A long time ago, the Great Programmer wrote a program that runs all possible universes on His Big Computer. \"Possible\" means \"computable\": (1) Each universe evolves on a discrete time scale. (2) Any universe's state at a given time is describable by a finite number of bits. One of the many universes is ours, despite some who evolved in it and claim it is incomputable. Computable universes. Let T M denote an arbitrary universal Turing machine with unidirectional output tape. T M 's input and output symbols are \"0\", \"1\", and \",\" (comma). T M 's possible input programs can be ordered alphabetically: \"\" (empty program), \"0\", \"1\", \",\", \"00\", \"01\", \"0,\", \"10\", \"11\", \"1,\", \",0\", \",1\", \",,\", \"000\", etc. Let A k denote T M 's k-th program in this list. Its output will be a finite or infinite string over the alphabet f \"0\",\"1\",\",\"g. This sequence of bitstrings separated by commas will be interpreted as the evolution E k of universe U k . If E k includes at least one comma, then let U l k represents U k 's state at the l-th time step of E k (k; l 2 f1; 2; : : : ; g). E k is represented by the sequence U 1 k corresponds to U k 's big bang. Different algorithms may compute the same universe. Some universes are finite (those whose programs cease producing outputs at some point), others are not. I don't know about ours. TM not important. The choice of the Turing machine is not important. This is due to the compiler theorem: for each universal Turing machine C there exists a constant prefix C 2 f \"0\",\"1\",\",\"g fl such that for all possible programs p, C's output in response to program C p is identical to T M 's output in response to p. The prefix C is the compiler that compiles programs for T M into equivalent programs for C. k denote the l-th (possibly empty) bitstring before the l-th comma. U l",
+ "neighbors": [
+ 38
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1075,
+ "label": 5,
+ "text": "Title: The Predictability of Data Values \nAbstract: Copyright 1997 IEEE. Published in the Proceedings of Micro-30, December 1-3, 1997 in Research Triangle Park, North Carolina. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions IEEE Service Center 445 Hoes Lane P.O. Box 1331 Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966. ",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1076,
+ "label": 6,
+ "text": "Title: On-Line Portfolio Selection Using Multiplicative Updates \nAbstract: We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio determined in hindsight from the actual market outcomes. The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth. Our algorithm is very simple to implement and requires only constant storage and computing time per stock in each trading period. We tested the performance of our algorithm on real stock data from the New York Stock Exchange accumulated during a 22-year period. On this data, our algorithm clearly outperforms the best single stock as well as Cover's universal portfolio selection algorithm. We also present results for the situation in which the investor has access to additional \"side information.\" ",
+ "neighbors": [
+ 1087,
+ 1109,
+ 1203
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1077,
+ "label": 3,
+ "text": "Title: On the Semantics of Belief Revision Systems \nAbstract: We consider belief revision operators that satisfy the Alchourron-Gardenfors-Makinson postulates, and present an epistemic logic in which, for any such revision operator, the result of a revision can be described by a sentence in the logic. In our logic, the fact that the agent's set of beliefs is is represented by the sentence O, where O is Levesque's `only know' operator. Intuitively, O is read as ` is all that is believed.' The fact that the agent believes is represented by the sentence B , read in the usual way as ` is believed'. The connective represents update as defined by Katsuno and Mendelzon. The revised beliefs are represented by the sentence O B . We show that for every revision operator that satisfies the AGM postulates, there is a model for our epistemic logic such that the beliefs implied by the sentence O B in this model correspond exactly to the sentences implied by the theory that results from revising by . This means that reasoning about changes in the agent's beliefs reduces to model checking of certain epistemic sentences. The negative result in the paper is that this type of formal account of revision cannot be extended to the situation where the agent is able to reason about its beliefs. A fully introspective agent cannot use our construction to reason about the results of its own revisions, on pain of triviality. ",
+ "neighbors": [
+ 196,
+ 265,
+ 987
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1078,
+ "label": 2,
+ "text": "Title: Best-First Model Merging for Dynamic Learning and Recognition \nAbstract: Best-first model merging is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modelling, and choosing the structure of a balltree or bumptree to maximize efficiency of access.",
+ "neighbors": [
+ 49,
+ 86,
+ 1160,
+ 1248
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1079,
+ "label": 2,
+ "text": "Title: Analysis of Linsker's application of Hebbian rules to Linear Networks \nAbstract: Linsker has reported the development of structured receptive fields in simulations using a Hebb-type synaptic plasticity rule in a feed-forward linear network. The synapses develop under dynamics determined by a matrix that is closely related to the covariance matrix of input cell activities. We analyse the dynamics of the learning rule in terms of the eigenvectors of this matrix. These eigenvectors represent independently evolving weight structures. Some general theorems are presented regarding the properties of these eigenvectors and their eigenvalues. For a general covariance matrix four principal parameter regimes are predicted. We concentrate on the gaussian covariances at layer B ! C of Linsker's network. Analytic and numerical solutions for the eigenvectors at this layer are presented. Three eigenvectors dominate the dynamics: a DC eigenvector, in which all synapses have the same sign; a bi-lobed, oriented eigenvector; and a circularly symmetric, centre-surround eigenvector. Analysis of the circumstances in which each of these vectors dominates yields an explanation of the emergence of centre-surround structures and symmetry-breaking bi-lobed structures. Criteria are developed estimating the boundary of the parameter regime in which centre-surround structures emerge. The application of our analysis to Linsker's higher layers, at which the covariance functions were oscillatory, is briefly discussed. ",
+ "neighbors": [
+ 240,
+ 425
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1080,
+ "label": 3,
+ "text": "Title: WEAK CONVERGENCE AND OPTIMAL SCALING OF RANDOM WALK METROPOLIS ALGORITHMS \nAbstract: This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm, in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension of a sequence of target densities, n, converges to 1. When the proposal variance is appropriately scaled according to n, the sequence of stochastic processes formed by the first component of each Markov chain, converge to the appropriate limiting Langevin diffusion process. The limiting diffusion approximation admits a straight-forward efficiency maximization problem, and the resulting asymptotically optimal policy is related to the asymptotic acceptance rate of proposed moves for the algorithm. The asymptotically optimal acceptance rate is 0.234 under quite general conditions. The main result is proved in the case where the target density has a symmetric product form. Extensions of the result are discussed. ",
+ "neighbors": [
+ 1130,
+ 1228
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1081,
+ "label": 4,
+ "text": "Title: Coordinating Reactive Behaviors keywords: reactive systems, planning and learning \nAbstract: Combinating reactivity with planning has been proposed as a means of compensating for potentially slow response times of planners while still making progress toward long term goals. The demands of rapid response and the complexity of many environments make it difficult to decompose, tune and coordinate reactive behaviors while ensuring consistency. Neural networks can address the tuning problem, but are less useful for decomposition and coordination. We hypothesize that interacting reactions can be decomposed into separate behaviors resident in separate networks and that the interaction can be coordinated through the tuning mechanism and a higher level controller. To explore these issues, we have implemented a neural network architecture as the reactive component of a two layer control system for a simulated race car. By varying the architecture, we test whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence. ",
+ "neighbors": [
+ 263,
+ 327,
+ 372
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1082,
+ "label": 6,
+ "text": "Title: Teaching a Smarter Learner \nAbstract: We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.",
+ "neighbors": [
+ 179,
+ 572,
+ 1333
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1083,
+ "label": 2,
+ "text": "Title: A Simple Randomized Quantization Algorithm for Neural Network Pattern Classifiers \nAbstract: This paper explores some algorithms for automatic quantization of real-valued datasets using thermometer codes for pattern classification applications. Experimental results indicate that a relatively simple randomized thermometer code generation technique can result in quantized datasets that when used to train simple perceptrons, can yield generalization on test data that is substantially better than that obtained with their unquantized counterparts.",
+ "neighbors": [
+ 288,
+ 1051,
+ 1235
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1084,
+ "label": 1,
+ "text": "Title: Using Modeling Knowledge to Guide Design Space Search \nAbstract: Automated search of a space of candidate designs seems an attractive way to improve the traditional engineering design process. To make this approach work, however, the automated design system must include both knowledge of the modeling limitations of the method used to evaluate candidate designs and also an effective way to use this knowledge to influence the search process. We suggest that a productive approach is to include this knowledge by implementing a set of model constraint functions which measure how much each modeling assumptions is violated, and to influence the search by using the values of these model constraint functions as constraint inputs to a standard constrained nonlinear optimization numerical method. We test this idea in the domain of conceptual design of supersonic transport aircraft, and our experiments indicate that our model constraint communication strategy can decrease the cost of design space search by one or more orders of magnitude. ",
+ "neighbors": [
+ 428,
+ 429,
+ 1334
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1085,
+ "label": 6,
+ "text": "Title: TOWARDS CONCEPT FORMATION GROUNDED ON PERCEPTION AND ACTION OF A MOBILE ROBOT \nAbstract: The recognition of objects and, hence, their descriptions must be grounded in the environment in terms of sensor data. We argue, why the concepts, used to classify perceived objects and used to perform actions on these objects, should integrate action-oriented perceptual features and perception-oriented action features. We present a grounded symbolic representation for these concepts. Moreover, the concepts should be learned. We show a logic-oriented approach to learning grounded concepts. ",
+ "neighbors": [
+ 1142
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1086,
+ "label": 2,
+ "text": "Title: Improving RBF Networks by the Feature Selection Approach EUBAFES \nAbstract: The curse of dimensionality is one of the severest problems concerning the application of RBF networks. The number of RBF nodes and therefore the number of training examples needed grows exponentially with the intrinsic dimensionality of the input space. One way to address this problem is the application of feature selection as a data preprocessing step. In this paper we propose a two-step approach for the determination of an optimal feature subset: First, all possible feature-subsets are reduced to those with best discrimination properties by the application of the fast and robust filter technique EUBAFES. Secondly we use a wrapper approach to judge, which of the pre-selected feature subsets leads to RBF networks with least complexity and best classification accuracy. Experiments are undertaken to show the improvement for RBF networks by our feature selection approach. ",
+ "neighbors": [
+ 242
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1087,
+ "label": 3,
+ "text": "Title: Update rules for parameter estimation in Bayesian networks \nAbstract: This paper re-examines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in on-line learning [12]. We provide a unified framework for parameter estimation that encompasses both on-line learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process. In the batch case, our framework encompasses both the gradient projection algorithm [2, 3] and the EM algorithm [14] for Bayesian networks. The framework also leads to new on-line and batch parameter update schemes, including a parameterized version of EM. We provide both empirical and theoretical results indicating that parameterized EM allows faster convergence to the maximum likelihood parame ters than does standard EM.",
+ "neighbors": [
+ 255,
+ 321,
+ 336,
+ 1076,
+ 1203
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1088,
+ "label": 0,
+ "text": "Title: Knowledge Compilation and Speedup Learning in Continuous Task Domains \nAbstract: Many techniques for speedup learning and knowledge compilation focus on the learning and optimization of macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization does not fit well the class of task domains in which the problem solver is required to perform in a continuous manner. For example, in many robotic domains, the problem solver is required to monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to successfully accomplish its task. In such domains, discrete symbolic states and operators are difficult to define. To improve its performance in continuous problem domains, a problem solver must learn, modify, and use continuous operators that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that can compile sensorimo-tor experiences into continuous operators, which can then be used to improve performance of the problem solver. The method speeds up the task performance as well as results in improvements in the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimen tation.",
+ "neighbors": [
+ 500,
+ 617,
+ 1198
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1089,
+ "label": 1,
+ "text": "Title: A Case Study on Tuning of Genetic Algorithms by Using Performance Evaluation Based on Experimental Design \nAbstract: This paper proposes four performance measures of a genetic algorithm (GA) which enable us to compare different GAs for an op timization problem and different choices of their parameters' values. The performance measures are defined in terms of observations in simulation, such as the frequency of optimal solutions, fitness values, the frequency of evolution leaps, and the number of generations needed to reach an optimal solution. We present a case study in which parameters of a GA for robot path planning was tuned and its performance was optimized through performance evaluation by using the measures. Especially, one of the performance measures is used to demonstrate the adaptivity of the GA for robot path planning. We also propose a process of systematic tuning based on techniques for the design of experiments. ",
+ "neighbors": [
+ 91,
+ 603
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1090,
+ "label": 2,
+ "text": "Title: A Method for Identifying Splice Sites and Translational Start Sites in \nAbstract: This paper describes a new method for determining the consensus sequences that signal the start of translation and the boundaries between exons and introns (donor and acceptor sites) in eukaryotic mRNA. The method takes into account the dependencies between adjacent bases, in contrast to the usual technique of considering each position independently. When coupled with a dynamic program to compute the most likely sequence, new consensus sequences emerge. The consensus sequence information is summarized in conditional probability matrices which, when used to locate signals in uncharacter-ized genomic DNA, have greater sensitivity and specificity than conventional matrices. Species-specific versions of these matrices are especially effective at distinguishing true and false sites. ",
+ "neighbors": [
+ 156,
+ 360,
+ 1113
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1091,
+ "label": 2,
+ "text": "Title: Learning Feature-based Semantics with Simple Recurrent Networks \nAbstract: The paper investigates the possibilities for using simple recurrent networks as transducers which map sequential natural language input into non-sequential feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun-phrases with adjectival modifiers), and shows robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain information that is not reflected in the outputs over intermediate phases of processing. Two extensions to Elman's [9] original recurrent network architecture are introduced. ",
+ "neighbors": [
+ 1160,
+ 1240
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1092,
+ "label": 4,
+ "text": "Title: Emergent Control and Planning in an Autonomous Vehicle \nAbstract: We use a connectionist network trained with reinforcement to control both an autonomous robot vehicle and a simulated robot. We show that given appropriate sensory data and architectural structure, a network can learn to control the robot for a simple navigation problem. We then investigate a more complex goal-based problem and examine the plan-like behavior that emerges. ",
+ "neighbors": [
+ 1059
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1093,
+ "label": 0,
+ "text": "Title: Applying Case-Based Reasoning to Control in Robotics \nAbstract: The proposed architecture is experimentally evaluated on two real world domains and the results are compared to other machine learning algorithms applied to the same problem.",
+ "neighbors": [
+ 825,
+ 1096
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1094,
+ "label": 6,
+ "text": "Title: Tracking Drifting Concepts By Minimizing Disagreements \nAbstract: In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy *. Furthermore, the complexity of the class H of possible targets, as measured by d, its VC-dimension, also effects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability * of making a mistake if the target movement rate is at most a constant times * 2 =(k(d + k) ln 1 * ), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times * 2 =(d 2 ln 1 In addition, we prove complementary results for the classes of halfspaces and axis-aligned hy- perrectangles showing that the maximum rate of drift that any algorithm (even with unlimited computational power) can tolerate is a constant times * 2 =d. ",
+ "neighbors": [
+ 62,
+ 346
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1095,
+ "label": 0,
+ "text": "Title: A Similarity-Based Retrieval Tool for Software Repositories \nAbstract: In this paper we present a prototype of a flexible similarity-based retrieval system. Its flexibility is supported by allowing for an imprecisely specified query. Moreover, our algorithm allows for assessing if the retrieved items are relevant in the initial context, specified in the query. The presented system can be used as a supporting tool for a software repository. We also discuss system evaluation with concerns on usefulness, scalability, applicability and comparability. Evaluation of the T A3 system on three domains gives us encouraging results and an integration of TA3 into a real software repository as a retrieval tool is ongoing. ",
+ "neighbors": [
+ 825,
+ 1096
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1096,
+ "label": 0,
+ "text": "Title: A Case-Based Reasoning Approach \nAbstract: The AAAI Fall Symposium; Flexible Computation in Intelligent Systems: Results, Issues, and Opportunities. Nov. 9-11, 1996, Cambridge, MA Abstract This paper presents a case-based reasoning system TA3. We address the flexibility of the case-based reasoning process, namely flexible retrieval of relevant experiences, by using a novel similarity assessment theory. To exemplify the advantages of such an approach, we have experimentally evaluated the system and compared its performance to the performance of non-flexible version of TA3 and to other machine learning algorithms on several domains. ",
+ "neighbors": [
+ 1093,
+ 1095,
+ 1098
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1097,
+ "label": 3,
+ "text": "Title: A Market Framework for Pooling Opinions \nAbstract: Consider a group of Bayesians, each with a subjective probability distribution over a set of uncertain events. An opinion pool derives a single consensus distribution over the events, representative of the group as a whole. Several pooling functions have been proposed, each sensible under particular assumptions or measures. Many researchers over many years have failed to form a consensus on which method is best. We propose a market-based pooling procedure, and analyze its properties. Participants bet on securities, each paying off contingent on an uncertain event, so as to maximize their own expected utilities. The consensus probability of each event is defined as the corresponding security's equilibrium price. The market framework provides explicit monetary incentives for participation and honesty, and allows agents to maintain individual rationality and limited privacy. \"No arbitrage\" arguments ensure that the equilibrium prices form legal probabilities. We show that, when events are disjoint and all participants have exponential utility for money, the market derives the same result as the logarithmic opinion pool; similarly, logarithmic utility for money yields the linear opinion pool. In both cases, we prove that the group's behavior is, to an outside observer, indistinguishable from that of a rational individual, whose beliefs equal the equilibrium prices. ",
+ "neighbors": [
+ 988
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1098,
+ "label": 6,
+ "text": "Title: On the Informativeness of the DNA Promoter Sequences Domain Theory \nAbstract: The DNA promoter sequences domain theory and database have become popular for testing systems that integrate empirical and analytical learning. This note reports a simple change and reinterpretation of the domain theory in terms of M-of-N concepts, involving no learning, that results in an accuracy of 93.4% on the 106 items of the database. Moreover, an exhaustive search of the space of M-of-N domain theory interpretations indicates that the expected accuracy of a randomly chosen interpretation is 76.5%, and that a maximum accuracy of 97.2% is achieved in 12 cases. This demonstrates the informativeness of the domain theory, without the complications of understanding the interactions between various learning algorithms and the theory. In addition, our results help characterize the difficulty of learning using the DNA promoters theory.",
+ "neighbors": [
+ 88,
+ 1096,
+ 1339
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1099,
+ "label": 2,
+ "text": "Title: Rearrangement of receptive field topography after intracortical and peripheral stimulation: The role of plasticity in\nAbstract: Intracortical microstimulation (ICMS) of a single site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the number of neurons responsive to the skin region corresponding to the ICMS-site receptive field (RF), with very little effect on the position and size of the ICMS-site RF, and the response evoked at the ICMS site by tactile stimulation (Recanzone et al., 1992b). Large changes in RF topography are observed following several weeks of repetitive stimulation of a restricted skin region in monkeys (Jenkins et al., 1990; Recanzone et al., 1992acde). Repetitive stimulation of a localized skin region in monkeys produced by training the monkeys in a tactile frequency discrimination task improves their performance (Recanzone et al., 1992a). It has been suggested that these changes in RF topography are caused by competitive learning in excitatory pathways (Grajski & Merzenich, 1990; Jenkins et al., 1990; Recanzone et al., 1992abcde). ICMS almost simultaneously excites excitatory and inhibitory terminals and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. Thus, this paper investigates the implications of the possibility that lateral inhibitory pathways too may undergo synaptic plasticity during ICMS. Lateral inhibitory pathways may also undergo synaptic plasticity in adult animals during peripheral conditioning. The \"EXIN\" (afferent excitatory and lateral inhibitory) synaptic plasticity rules ",
+ "neighbors": [
+ 203,
+ 1167
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1100,
+ "label": 5,
+ "text": "Title: A Partial Memory Incremental Learning Methodology And Its Application To Computer Intrusion Detection \nAbstract: Intracortical microstimulation (ICMS) of a single site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the number of neurons responsive to the skin region corresponding to the ICMS-site receptive field (RF), with very little effect on the position and size of the ICMS-site RF, and the response evoked at the ICMS site by tactile stimulation (Recanzone et al., 1992b). Large changes in RF topography are observed following several weeks of repetitive stimulation of a restricted skin region in monkeys (Jenkins et al., 1990; Recanzone et al., 1992acde). Repetitive stimulation of a localized skin region in monkeys produced by training the monkeys in a tactile frequency discrimination task improves their performance (Recanzone et al., 1992a). It has been suggested that these changes in RF topography are caused by competitive learning in excitatory pathways (Grajski & Merzenich, 1990; Jenkins et al., 1990; Recanzone et al., 1992abcde). ICMS almost simultaneously excites excitatory and inhibitory terminals and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. Thus, this paper investigates the implications of the possibility that lateral inhibitory pathways too may undergo synaptic plasticity during ICMS. Lateral inhibitory pathways may also undergo synaptic plasticity in adult animals during peripheral conditioning. The \"EXIN\" (afferent excitatory and lateral inhibitory) synaptic plasticity rules ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1101,
+ "label": 0,
+ "text": "Title: LINNEO A Classification Methodology for Ill-structured Domains \nAbstract: In this work we present a classification methodology (LINNEO + ) to discover concepts from ill-structured domains and to organize hierarchies with them. In order to achieve this aim LINNEO + uses conceptual learning techniques and classification. The final target is to build knowledge bases after expert validation. Some techniques for the improvement of the results in the classification step are used, like biasing using partial expert knowledge (classification rules or causal and structural dependencies between attributes) or delayed cluster assignation of objects. Also some comparisons with a few well-known systems are shown.",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1102,
+ "label": 2,
+ "text": "Title: Data Mining for Association Rules with Unsupervised Neural Networks \nAbstract: results for Gaussian mixture models and factor analysis are discussed. ",
+ "neighbors": [
+ 19,
+ 387,
+ 1166
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1103,
+ "label": 3,
+ "text": "Title: DE-NOISING BY reconstruction f n is defined in the wavelet domain by translating all the\nAbstract: p n. We prove two results about that estimator. [Smooth]: With high probability ^ f fl n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with Acknowledgements. These results were described at the Symposium on Wavelet Theory, held in connection with the Shanks Lectures at Van-derbilt University, April 3-4 1992. The author would like to thank Professor L.L. Schumaker for hospitality at the conference, and R.A. DeVore, Iain Johnstone, Gerard Kerkyacharian, Bradley Lucier, A.S. Nemirovskii, Ingram Olkin, and Dominique Picard for interesting discussions and correspondence on related topics. The author is also at the University of California, Berkeley ",
+ "neighbors": [
+ 1033,
+ 1133
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1104,
+ "label": 2,
+ "text": "Title: TREE CONTRACTIONS AND EVOLUTIONARY TREES \nAbstract: An evolutionary tree is a rooted tree where each internal vertex has at least two children and where the leaves are labeled with distinct symbols representing species. Evolutionary trees are useful for modeling the evolutionary history of species. An agreement subtree of two evolutionary trees is an evolutionary tree which is also a topological subtree of the two given trees. We give an algorithm to determine the largest possible number of leaves in any agreement subtree of two trees T 1 and T 2 with n leaves each. If the maximum degree d of these trees is bounded by a constant, the time complexity is O(n log 2 n) and is within a log n factor of optimal. For general d, this algorithm runs in O(nd 2 log d log 2 n) time or alternately in O(nd p d log 3 n) time. ",
+ "neighbors": [
+ 172,
+ 998
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1105,
+ "label": 1,
+ "text": "Title: Price's Theorem and the MAX Problem \nAbstract: We present a detailed analysis of the evolution of GP populations using the problem of finding a program which returns the maximum possible value for a given terminal and function set and a depth limit on the program tree (known as the MAX problem). We confirm the basic message of [ Gathercole and Ross, 1996 ] that crossover together with program size restrictions can be responsible for premature convergence to a sub-optimal solution. We show that this can happen even when the population retains a high level of variety and show that in many cases evolution from the sub-optimal solution to the solution is possible if sufficient time is allowed. In both cases theoretical models are presented and compared with actual runs. Experimental evidence is presented that Price's Covariance and Selection Theorem can be applied to GP populations and the practical effect of program size restrictions are noted. Finally we show that covariance between gene frequency and fitness in the first few generations can be used to predict the course of GP runs.",
+ "neighbors": [
+ 707,
+ 1034,
+ 1145,
+ 1178
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1106,
+ "label": 3,
+ "text": "Title: A Probabilistic Calculus of Actions \nAbstract: We present a symbolic machinery that admits both probabilistic and causal information about a given domain, and produces probabilistic statements about the effect of actions and the impact of observations. The calculus admits two types of conditioning operators: ordinary Bayes conditioning, P (yjX = x), which represents the observation X = x, and causal conditioning, P (yjdo(X = x)), read: the probability of Y = y conditioned on holding X constant (at x) by deliberate action. Given a mixture of such observational and causal sentences, together with the topology of the causal graph, the calculus derives new conditional probabilities of both types, thus enabling one to quantify the effects of actions and observations.",
+ "neighbors": [
+ 141,
+ 451,
+ 850,
+ 1140
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1107,
+ "label": 1,
+ "text": "Title: A Cooperative Coevolutionary Approach to Function Optimization \nAbstract: A general model for the coevolution of cooperating species is presented. This model is instantiated and tested in the domain of function optimization, and compared with a traditional GA-based function optimizer. The results are encouraging in two respects. They suggest ways in which the performance of GA and other EA-based optimizers can be improved, and they suggest a new approach to evolving complex structures such as neural networks and rule sets.",
+ "neighbors": [
+ 205,
+ 415,
+ 634,
+ 709,
+ 851
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1108,
+ "label": 0,
+ "text": "Title: The Utility of Knowledge in Inductive Learning Running Head: Knowledge in Inductive Learning \nAbstract: This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.",
+ "neighbors": [
+ 342,
+ 858,
+ 1320
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1109,
+ "label": 6,
+ "text": "Title: Universal Portfolios With and Without Transaction Costs \nAbstract: A constant rebalanced portfolio is an investment strategy which keeps the same distribution of wealth among a set of stocks from period to period. Recently there has been work on on-line investment strategies that are competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996a; Cover and Ordentlich, 1996b; Ordentlich and Cover, 1996; Cover, 1996). For the universal algorithm of Cover (Cover, 1991), we provide a simple analysis which naturally extends to the case of a fixed percentage transaction cost (commission), answering a question raised in (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996a; Cover and Ordentlich, 1996b; Ordentlich and Cover, 1996; Cover, 1996). In addition, we present a simple randomized implementation that is significantly faster in practice. We conclude by explaining how these algorithms can be applied to other problems, such as combining the predictions of statistical language models, where the resulting guarantees are more striking. ",
+ "neighbors": [
+ 255,
+ 1076
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1110,
+ "label": 3,
+ "text": "Title: Interpretation of Complex Scenes Using Bayesian Networks \nAbstract: In most object recognition systems, interactions between objects in a scene are ignored and the best interpretation is considered to be the set of hypothesized objects that matches the greatest number of image features. We show how image interpretation can be cast as the problem of finding the most probable explanation (MPE) in a Bayesian network that models both visual and physical object interactions. The problem of how to determine exact conditional probabilities for the network is shown to be unimportant, since the goal is to find the most probable configuration of objects, not to calculate absolute probabilities. We furthermore show that evaluating configurations by feature counting is equivalent to calculating the joint probability of the configuration using a restricted Bayesian network, and derive the assumptions about probabilities necessary to make a Bayesian formulation reasonable.",
+ "neighbors": [
+ 1137
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1111,
+ "label": 5,
+ "text": "Title: A Note on Scheduling Algorithms for Processors with Lookahead \nAbstract: ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1112,
+ "label": 5,
+ "text": "Title: Theoretical Modeling of Superscalar Processor Performance \nAbstract: The current trace-driven simulation approach to determine superscalar processor performance is widely used but has some shortcomings. Modern benchmarks generate extremely long traces, resulting in problems with data storage, as well as very long simulation run times. More fundamentally, simulation generally does not provide significant insight into the factors that determine performance or a characterization of their interactions. This paper proposes a theoretical model of superscalar processor performance that addresses these shortcomings. Performance is viewed as an interaction of program parallelism and machine parallelism. Both program and machine parallelisms are decomposed into multiple component functions. Methods for measuring or computing these functions are described. The functions are combined to provide a model of the interaction between program and machine parallelisms and an accurate estimate of the performance. The computed performance, based on this model, is compared to simulated performance for six benchmarks from the SPEC 92 suite on several configurations of the IBM RS/6000 instruction set architecture. ",
+ "neighbors": [
+ 423,
+ 1332
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1113,
+ "label": 2,
+ "text": "Title: Prediction of human mRNA donor and acceptor sites from the DNA sequence \nAbstract: Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment was able to predict splice site locations with confidence levels far better than previously reported in the literature. The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous amounts of false positives | in the paper the distribution of these false splice sites is examined and linked to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study this means that on the average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without accompaniment of any false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non-coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding. ",
+ "neighbors": [
+ 358,
+ 1011,
+ 1052,
+ 1090,
+ 1273,
+ 1299
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1114,
+ "label": 2,
+ "text": "Title: Approximation from shift-invariant subspaces of L \nAbstract: A complete characterization is given of closed shift-invariant subspaces of L 2 (IR d ) which provide a specified approximation order. When such a space is principal (i.e., generated by a single function), then this characterization is in terms of the Fourier transform of the generator. As a special case, we obtain the classical Strang-Fix conditions, but without requiring the generating function to decay at infinity. The approximation order of a general closed shift-invariant space is shown to be already realized by a specifiable principal subspace.",
+ "neighbors": [
+ 211,
+ 1300
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1115,
+ "label": 3,
+ "text": "Title: Modeling Belief in Dynamic Systems. Part I: Foundations \nAbstract: Belief change is a fundamental problem in AI: Agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes if he knows that is more plausible than :. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin for modeling knowledge in multi-agent systems. We then examine the problem of \"minimal change\". This notion can be captured by using prior plausibilities, an analogue to prior probabilities, which can be updated by \"conditioning\". We show by example that conditioning on a plausibility measure can capture many scenarios of interest. In a companion paper, we show how the two best-studied scenarios of belief change, belief revision and belief update, fit into our framework. ? Some of this work was done while both authors were at the IBM Almaden Research Center. The first author was also at Stanford while much of the work was done. IBM and Stanford's support are gratefully acknowledged. The work was also supported in part by the Air Force Office of Scientific Research (AFSC), under Contract F49620-91-C-0080 and grant F94620-96-1-0323 and by NSF under grants IRI-95-03109 and IRI-96-25901. A preliminary version of this paper appears in Proceedings of the 5th Conference on Theoretical Aspects of Reasoning About Knowledge, 1994, pp. 44-64, under the title \"A knowledge-based framework for belief change, Part I: Foundations\". ",
+ "neighbors": [
+ 160,
+ 1072
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1116,
+ "label": 0,
+ "text": "Title: Preparing Case Retrieval Nets for Distributed Processing \nAbstract: In this paper, we discuss two approaches of applying the memory model of Case Retrieval Nets to applications where a distributed processing of information is required. For this, we distinguish two types of such applications, namely (a) the case of distributed case libraries and (b) the case of distributed cases. While a solution to the former is straightforward, the latter requires an extension to Case Retrieval Nets which provides a kind of partitioning of the entire net structure. This extended model even allows for a concurrent implementation of the retrieval process or for the use of collaborative agents for retrieval. Keywords: Case-based reasoning, case retrieval, memory structures, distributed processing. ",
+ "neighbors": [
+ 37,
+ 41,
+ 1004,
+ 1005,
+ 1010
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1117,
+ "label": 0,
+ "text": "Title: Justification Structures for Document Reuse \nAbstract: Document drafting|an important problem-solving task of professionals in a wide variety of fields|typifies a design task requiring complex adaptation for case reuse. This paper proposes a framework for document reuse based on an explicit representation of the illocutionary and rhetorical structure underlying documents. Explicit representation of this structure facilitates (1) interpretation of previous documents by enabling them to \"explain themselves,\" (2) construction of documents by enabling document drafters to issue goal-based specifications and rapidly retrieve documents with similar intentional structure, and (3) mainte nance of multi-generation documents.",
+ "neighbors": [
+ 378,
+ 1268
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1118,
+ "label": 2,
+ "text": "Title: A Hierarchical Latent Variable Model for Data Visualization \nAbstract: Visualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of multi-variate data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach on a toy data set, and we then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images. A Matlab software implementation of the algorithm is publicly available from the world-wide web. ",
+ "neighbors": [
+ 40,
+ 1041
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1119,
+ "label": 2,
+ "text": "Title: On the Usage of Differential Evolution for Function Optimization Differential Evolution (DE) has recently proven\nAbstract: assumed unless otherwise stated. Basically, DE generates new parameter vectors by adding the weighted difference between two population vectors to a third vector. If the resulting vector yields a lower objective function value than a predetermined population member, the newly generated vector replaces the vector, with which it was compared, in the next generation; otherwise, the old vector is retained. This basic principle, however, is extended when it comes to the practical variants of DE. For example an existing vector can be perturbed by adding more than one weighted difference vector to it. In most cases, it is also worthwhile to mix the parameters of the old vector with those of the perturbed one before comparing the objective function values. Several variants of DE which have proven to be useful will be described in the ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1120,
+ "label": 5,
+ "text": "Title: Applying ILP to Diterpene Structure Elucidation from 13 C NMR Spectra \nAbstract: We present a novel application of ILP to the problem of diterpene structure elucidation from 13 C NMR spectra. Diterpenes are organic compounds of low molecular weight that are based on a skeleton of 20 carbon atoms. They are of significant chemical and commercial interest because of their use as lead compounds in the search for new pharmaceutical effectors. The structure elucidation of diterpenes based on 13 C NMR spectra is usually done manually by human experts with specialized background knowledge on peak patterns and chemical structures. In the process, each of the 20 skeletal atoms is assigned an atom number that corresponds to its proper place in the skeleton and the diterpene is classified into one of the possible skeleton types. We address the problem of learning classification rules from a database of peak patterns for diterpenes with known structure. Recently, propositional learning was successfully applied to learn classification rules from spectra with assigned atom numbers. As the assignment of atom numbers is a difficult process in itself (and possibly indistinguishable from the classification process), we apply ILP, i.e., relational learning, to the problem of classifying spectra without assigned atom numbers. ",
+ "neighbors": [
+ 239,
+ 1159,
+ 1247,
+ 1308
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1121,
+ "label": 6,
+ "text": "Title: Learning to Classify Sensor Data inductive bias, supervised Bayesian learning, minimum description length. \nAbstract: ",
+ "neighbors": [
+ 1331
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1122,
+ "label": 2,
+ "text": "Title: Learning Polynomial Functions by Feature Construction \nAbstract: We present a method for learning higher-order polynomial functions from examples using linear regression and feature construction. Regression is used on a set of training instances to produce a weight vector for a linear function over the feature set. If this hypothesis is imperfect, a new feature is constructed by forming the product of the two features that most effectively predict the squared error of the current hypothesis. The algorithm is then repeated. In an extension to this method, the specific pair of features to combine is selected by measuring their joint ability to predict the hypothesis' error.",
+ "neighbors": [
+ 1305
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1123,
+ "label": 3,
+ "text": "Title: Sonderforschungsbereich 314 K unstliche Intelligenz Wissensbasierte Systeme KI-Labor am Lehrstuhl f ur Informatik IV Numerical\nAbstract: Some problems can be solved only by multi-agent teams. In using genetic programming to produce such teams, one faces several design decisions. First, there are questions of team diversity and of breeding strategy. In one commonly used scheme, teams consist of clones of single individuals; these individuals breed in the normal way and are cloned to form teams during fitness evaluation. In contrast, teams could also consist of distinct individuals. In this case one can either allow free interbreeding between members of different teams, or one can restrict interbreeding in various ways. A second design decision concerns the types of coordination-facilitating mechanisms provided to individual team members; these range from sensors of various sorts to complex communication systems. This paper examines three breeding strategies (clones, free, and restricted) and three coordination mechanisms (none, deictic sensing, and name-based sensing) for evolving teams of agents in the Serengeti world, a simple predator/prey environment. Among the conclusions are the fact that a simple form of restricted interbreeding outperforms free interbreeding in all teams with distinct individuals, and the fact that name-based sensing consistently outperforms deictic sensing.",
+ "neighbors": [
+ 364,
+ 711,
+ 1191
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1124,
+ "label": 2,
+ "text": "Title: MULTIPLE SCALES OF BRAIN-MIND INTERACTIONS \nAbstract: Posner and Raichle's Images of Mind is an excellent educational book and very well written. Some aws as a scientific publication are: (a) the accuracy of the linear subtraction method used in PET is subject to scrutiny by further research at finer spatial-temporal resolutions; (b) lack of accuracy of the experimental paradigm used for EEG complementary studies. Images (Posner & Raichle, 1994) is an excellent introduction to interdisciplinary research in cognitive and imaging science. Well written and illustrated, it presents concepts in a manner well suited both to the layman/undergraduate and to the technical nonexpert/graduate student and postdoctoral researcher. Many, not all, people involved in interdisciplinary neuroscience research agree with the P & R's statements on page 33, on the importance of recognizing emergent properties of brain function from assemblies of neurons. It is clear from the sparse references that this book was not intended as a standalone review of a broad field. There are some aws in the scientific development, but this must be expected in such a pioneering venture. P & R hav e proposed many cognitive mechanisms deserving further study with imaging tools yet to be developed which can yield better spatial-temporal resolutions. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1125,
+ "label": 3,
+ "text": "Title: UNIVERSAL FORMULAS FOR TREATMENT EFFECTS FROM NONCOMPLIANCE DATA \nAbstract: This paper establishes formulas that can be used to bound the actual treatment effect in any experimental study in which treatment assignment is random but subject compliance is imperfect. These formulas provide the tightest bounds on the average treatment effect that can be inferred given the distribution of assignments, treatments, and responses. Our results reveal that even with high rates of noncompliance, experimental data can yield significant and sometimes accurate information on the effect of a treatment on the population.",
+ "neighbors": [
+ 742,
+ 1027
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1126,
+ "label": 3,
+ "text": "Title: Bayesian Design for the Normal Linear Model with Unknown Error Variance \nAbstract: Most of the Bayesian theory of optimal experimental design, for the normal linear model, has been developed under the restrictive assumption that the variance is known. In special cases, insensitivity of specific design criteria to specific prior assumptions on the variance has been demonstrated, but a general result to show the way in which Bayesian optimal designs are affected by prior information about the variance is lacking. This paper stresses the important distinction between expected utility functions and optimality criteria, examines a number of expected utility functions some of which possess interesting properties, and deserve wider use and derives the relevant Bayesian optimality criteria under normal assumptions. This unifying setup is useful for proving the main result of the paper, that clarifies the issue of designing for the normal linear model with unknown variance. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1127,
+ "label": 5,
+ "text": "Title: Scheduling and Mapping: Software Pipelining in the Presence of Structural Hazards proposed formulation and a\nAbstract: Recently, software pipelining methods based on an ILP (Integer Linear Programming) framework have been successfully applied to derive rate-optimal schedules for architectures involving clean pipelines | pipelines without structural hazards. The problem for architectures beyond such clean pipelines remains open. One challenge is how, under a unified ILP framework, to simultaneously represent resource constraints for unclean pipelines, and the assignment or mapping of operations from a loop to those pipelines. In this paper we provide a framework which does exactly this, and in addition constructs rate-optimal software pipelined schedules. ",
+ "neighbors": [
+ 1054
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1128,
+ "label": 4,
+ "text": "Title: Multi-time Models for Temporally Abstract Planning \nAbstract: Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Current model-based reinforcement learning is based on one-step models that cannot represent common-sense higher-level actions, such as going to lunch, grasping an object, or flying to Denver. This paper generalizes prior work on temporally abstract models [Sutton, 1995] and extends it from the prediction setting to include actions, control, and planning. We introduce a more general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship to the Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential advantages in a The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g., Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993). Model-based reinforcement learning offers a possible solution to the problem of integrating planning with real-time learning and decision-making (Peng & Williams, 1993, Moore & Atkeson, 1993; Sutton and Barto, 1998). However, current model-based reinforcement learning is based on one-step models that cannot represent common-sense, higher-level actions. Modeling such actions requires the ability to handle different, interrelated levels of temporal abstraction. A new approach to modeling at multiple time scales was introduced by Sutton (1995) based on prior work by Singh , Dayan , and Sutton and Pinette . This approach enables models of the environment at different temporal scales to be intermixed, producing temporally abstract models. However, that work was concerned only with predicting the environment. This paper summarizes an extension of the approach including actions and control of the environment [Precup & Sutton, 1997]. In particular, we generalize the usual notion of a gridworld planning task.",
+ "neighbors": [
+ 1053,
+ 1147,
+ 1163
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1129,
+ "label": 0,
+ "text": "Title: A Yardstick for the Evaluation of Case-Based Classifiers \nAbstract: This paper proposes that the generalisation capabilities of a case-based reasoning system can be evaluated by comparison with a `rote-learning' algorithm which uses a very simple generalisation strategy. Two such algorithms are defined, and expressions for their classification accuracy are derived as a function of the size of training sample. A series of experiments using artificial and `natural' data sets is described in which the learning curve for a case-based learner is compared with those for the apparently trivial rote-learning learning algorithms. The results show that in a number of `plausible' situations, the learning curves for a simple case-based learner and the `majority' rote-learner can barely be distinguished, although a domain is demonstrated where favourable performance from the case-based learner is observed. This suggests that the maxim of case-based reasoning that `similar problems have similar solutions' may be useful as the basis of a generalisation strategy only in selected domains.",
+ "neighbors": [
+ 886,
+ 1210
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1130,
+ "label": 3,
+ "text": "Title: Rates of convergence of the Hastings and Metropolis algorithms \nAbstract: We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution . In the independence case (in IR k ) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of ; in the symmetric case (in IR only) we show geometric convergence essentially occurs if and only if has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective. ",
+ "neighbors": [
+ 517,
+ 947,
+ 949,
+ 1063,
+ 1066,
+ 1080,
+ 1200
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1131,
+ "label": 6,
+ "text": "Title: WORST CASE PREDICTION OVER SEQUENCES UNDER LOG LOSS \nAbstract: We consider the game of sequentially assigning probabilities to future data based on past observations under logarithmic loss. We are not making probabilistic assumptions about the generation of the data, but consider a situation where a player tries to minimize his loss relative to the loss of the (with hindsight) best distribution from a target class for the worst sequence of data. We give bounds on the minimax regret in terms of the metric entropies of the target class with respect to suitable distances between distributions. ",
+ "neighbors": [
+ 255
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1132,
+ "label": 0,
+ "text": "Title: Similarity Metrics: A Formal Unification of Cardinal and Non-Cardinal Similarity Measures \nAbstract: In [9] we introduced a formal framework for constructing ordinal similarity measures, and suggested how this might also be applied to cardinal measures. In this paper we will place this approach in a more general framework, called similarity metrics. In this framework, ordinal similarity metrics (where comparison returns a boolean value) can be combined with cardinal metrics (returning a numeric value) and, indeed, with metrics returning values of other types, to produce new metrics.",
+ "neighbors": [
+ 37,
+ 166
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1133,
+ "label": 3,
+ "text": "Title: Wavelet Shrinkage: Asymptopia? \nAbstract: Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly- or exactly- minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons sometimes, similarity to known methods, sometimes, computational intractability, and sometimes, lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions - e.g. pointwise error, global error measured in L p norms, pointwise and global error in estimation of derivatives and for a wide range of smoothness classes, including standard Holder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity. Acknowledgements: These results have been described at the Oberwolfach meeting `Mathematische Stochastik' December, 1992 and at the AMS Annual meeting, January 1993. This work was supported by NSF DMS 92-09130. The authors would like to thank Paul-Louis Hennequin, who organized the Ecole d' Ete de Probabilites at Saint Flour 1990, where this collaboration began, and to Universite de Paris VII (Jussieu) and Universite de Paris-sud (Orsay) for supporting visits of DLD and IMJ. The authors would like to thank Ildar Ibragimov and Arkady Nemirovskii for personal correspondence cited below. p",
+ "neighbors": [
+ 1033,
+ 1103
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1134,
+ "label": 3,
+ "text": "Title: group, and despite having just 337 subjects, the study strongly supports Identification of causal effects\nAbstract: Figure 8a and Figure 8b show the prior distribution over f(-CR ) that follows from the flat prior and the skewed prior, respectively. Figure 8c and Figure 8d show the posterior distribution p(f (-CR jD)) obtained by our system when run on the Lipid data, using the flat prior and the skewed prior, respectively. From the bounds of Balke and Pearl (1994), it follows that under the large-sample assumption, 0:51 f (-CR jD) 0:86. Figure 8: Prior (a, b) and posterior (c,d) distributions for a subpopulation f (-CR jD) specified by the counter-factual query \"Would Joe have improved had he taken the drug, given that he did not improve without it\". (a) corresponds to the flat prior, (b) to the skewed prior. This paper identifies and demonstrates a new application area for network-based inference techniques - the management of causal analysis in clinical experimentation. These techniques, which were originally developed for medical diagnosis, are shown capable of circumventing one of the major problems in clinical experiments the assessment of treatment efficacy in the face of imperfect compliance. While standard diagnosis involves purely probabilistic inference in fully specified networks, causal analysis involves partially specified networks in which the links are given causal interpretation and where the domain of some variables are unknown. The system presented in this paper provides the clinical research community, we believe for the first time, an assumption-free, unbiased assessment of the average treatment effect. We offer this system as a practical tool to be used whenever full compliance cannot be enforced and, more broadly, whenever the data available is insufficient for answering the queries of interest to the clinical investigator. Lipid Research Clinic Program. 1984. The lipid research clinics coronary primary prevention trial results, parts i and ii. Journal of the American Medical Association 251(3):351-374. January. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1135,
+ "label": 3,
+ "text": "Title: On The Foundation Of Structural Equation Models or \nAbstract: When Can We Give Causal Interpretation Abstract The assumptions underlying statistical estimation are of fundamentally different character from the causal assumptions that underly structural equation models (SEM). The differences have been blurred through the years for the lack of a mathematical notation capable of distinguishing causal from equational relationships. Recent advances in graphical methods provide formal explication of these differences, and are destined to have profound impact on SEM's practice and philosophy.",
+ "neighbors": [
+ 742,
+ 1140
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1136,
+ "label": 5,
+ "text": "Title: Speculative Hedge: Regulating Compile-Time Speculation Against Profile Variations code performance in the presence of execution\nAbstract: Path-oriented scheduling methods, such as trace scheduling and hyperblock scheduling, use speculation to extract instruction-level parallelism from control-intensive programs. These methods predict important execution paths in the current scheduling scope using execution profiling or frequency estimation. Aggressive speculation is then applied to the important execution paths, possibly at the cost of degraded performance along other paths. Therefore, the speed of the output code can be sensitive to the compiler's ability to accurately predict the important execution paths. Prior work in this area has utilized the speculative yield function by Fisher, coupled with dependence height, to distribute instruction priority among execution paths in the scheduling scope. While this technique provides more stability of performance by paying attention to the needs of all paths, it does not directly address the problem of mismatch between compile-time prediction and run-time behavior. The work presented in this paper extends the speculative yield and dependence height heuristic to explicitly minimize the penalty suffered by other paths when instructions are speculated along a path. Since the execution time of a path is determined by the number of cycles spent between a path's entrance and exit in the scheduling scope, the heuristic attempts to eliminate unnecessary speculation that delays any path's exit. Such control of speculation makes the performance much less sensitive to the actual path taken at run time. The proposed method has a strong emphasis on achieving minimal delay to all exits. Thus the name, speculative hedge, is used. This paper presents the speculative hedge heuristic, and shows how it controls over-speculation in a superblock/hyperblock scheduler. The stability of out Copyright 1996 IEEE. Published in the Proceedings of the 29th Annual International Symposium on Microarchitecture, De-cember 2-4, 1996, Paris, France. Personal use of this material is permitted. However, permission to reprint/republish this material for resale or redistribution purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966 ",
+ "neighbors": [
+ 1002
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1137,
+ "label": 3,
+ "text": "Title: Efficient Inference in Bayes Networks As A Combinatorial Optimization Problem \nAbstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. In this paper, we define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance. ",
+ "neighbors": [
+ 295,
+ 965,
+ 1110
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1138,
+ "label": 1,
+ "text": "Title: Auto-teaching: networks that develop their own teaching input \nAbstract: Backpropagation learning (Rumelhart, Hinton and Williams, 1986) is a useful research tool but it has a number of undesiderable features such as having the experimenter decide from outside what should be learned. We describe a number of simulations of neural networks that internally generate their own teaching input. The networks generate the teaching input by trasforming the network input through connection weights that are evolved using a form of genetic algorithm. What results is an innate (evolved) capacity not to behave efficiently in an environment but to learn to behave efficiently. The analysis of what these networks evolve to learn shows some interesting results. ",
+ "neighbors": [
+ 70,
+ 308,
+ 430,
+ 646,
+ 1222
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1139,
+ "label": 3,
+ "text": "Title: Probabilistic evaluation of counterfactual queries \nAbstract: To appear in the Twelfth National Conference on Artificial Intelligence (AAAI-94), Seattle, WA, July 31 August 4, 1994. Technical Report R-213-A April, 1994 Abstract Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, and determination of liability. We present a formalism that uses probabilistic causal networks to evaluate one's belief that the counterfactual consequent, C, would have been true if the antecedent, A, were true. The antecedent of the query is interpreted as an external action that forces the proposition A to be true, which is consistent with Lewis' Miraculous Analysis. This formalism offers a concrete embodiment of the \"closest world\" approach which (1) properly reflects common understanding of causal influences, (2) deals with the uncertainties inherent in the world, and (3) is amenable to machine representation. ",
+ "neighbors": [
+ 152,
+ 448,
+ 451,
+ 557,
+ 850,
+ 1140
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1140,
+ "label": 3,
+ "text": "Title: Counterfactuals and Policy Analysis in Structural Models \nAbstract: Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, determination of liability, and policy analysis. We present a method for evaluating counter-factuals when the underlying causal model is represented by structural models a nonlinear generalization of the simultaneous equations models commonly used in econometrics and social sciences. This new method provides a coherent means for evaluating policies involving the control of variables which, prior to enacting the policy were influenced by other variables in the system. ",
+ "neighbors": [
+ 448,
+ 1106,
+ 1135,
+ 1139
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1141,
+ "label": 6,
+ "text": "Title: Malicious Membership Queries and Exceptions \nAbstract: Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, determination of liability, and policy analysis. We present a method for evaluating counter-factuals when the underlying causal model is represented by structural models a nonlinear generalization of the simultaneous equations models commonly used in econometrics and social sciences. This new method provides a coherent means for evaluating policies involving the control of variables which, prior to enacting the policy were influenced by other variables in the system. ",
+ "neighbors": [
+ 767,
+ 1214
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1142,
+ "label": 5,
+ "text": "Title: K unstliche Intelligenz Grdt: Enhancing Model-Based Learning for Its Application in Robot Navigation \nAbstract: The emergence of generalist and specialist behavior in populations of neural networks is studied. Energy extracting ability is included as a property of an organism. In artificial life simulations with organisms living in an environment, the fitness score can be interpreted as the combination of an organisms behavior and the ability of the organism to extract energy from potential food sources distributed in the environment. The energy extracting ability is viewed as an evolvable trait of organisms a particular organism's mechanisms for extracting energy from the environment and, therefore, it is not fixed and decided by the researcher. Simulations with fixed and evolvable energy extracting abilities show that the energy extracting mechanism, the sensory apparatus, and the behavior of organisms may co-evolve and be co-adapted. The results suggest that populations of organisms evolve to be generalists or specialists due to individual energy extracting abilities.",
+ "neighbors": [
+ 198,
+ 374,
+ 1085
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1143,
+ "label": 1,
+ "text": "Title: Adapting Control Strategies for Situated Autonomous Agents \nAbstract: This paper studies how to balance evolutionary design and human expertise in order to best design situated autonomous agents which can learn specific tasks. A genetic algorithm designs control circuits to learn simple behaviors, and given control strategies for simple behaviors, the genetic algorithm designs a combinational circuit that switches between these simple behaviors to perform a navigation task. Keywords: Genetic Algorithms, Computational Design, Autonomous Agents, Robotics. ",
+ "neighbors": [
+ 91,
+ 372,
+ 491,
+ 1156
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1144,
+ "label": 4,
+ "text": "Title: The Role of the Trainer in Reinforcement Learning \nAbstract: In this paper we propose a threestage incremental approach to the development of autonomous agents. We discuss some issues about the characteristics which differentiate reinforcement programs (RPs), and define the trainer as a particular kind of RP. We present a set of results obtained running experiments with a trainer which provides guidance to the AutonoMouse, our mousesized autonomous robot. ",
+ "neighbors": [
+ 372,
+ 878,
+ 1346
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1145,
+ "label": 1,
+ "text": "Title: The Troubling Aspects of a Building Block Hypothesis for Genetic Programming \nAbstract: In this paper we carefully formulate a Schema Theorem for Genetic Programming (GP) using a schema definition that accounts for the variable length and the non-homologous nature of GP's representation. In a manner similar to early GA research, we use interpretations of our GP Schema Theorem to obtain a GP Building Block definition and to state a \"classical\" Building Block Hypothesis (BBH): that GP searches by hierarchically combining building blocks. We report that this approach is not convincing for several reasons: it is difficult to find support for the promotion and combination of building blocks solely by rigourous interpretation of a GP Schema Theorem; even if there were such support for a BBH, it is empirically questionable whether building blocks always exist because partial solutions of consistently above average fitness and resilience to disruption are not assured; also, a BBH constitutes a narrow and imprecise account of GP search behavior.",
+ "neighbors": [
+ 68,
+ 91,
+ 575,
+ 707,
+ 766,
+ 941,
+ 952,
+ 1047,
+ 1105,
+ 1157,
+ 1174,
+ 1177,
+ 1221
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1146,
+ "label": 2,
+ "text": "Title: Statistical Mechanics of Nonlinear Nonequilibrium Financial Markets: Applications to Optimized Trading \nAbstract: A paradigm of statistical mechanics of financial markets (SMFM) using nonlinear nonequilibrium algorithms, first published in L. Ingber, Mathematical Modelling, 5, 343-361 (1984), is fit to multi-variate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. ",
+ "neighbors": [
+ 979,
+ 982,
+ 983,
+ 1291,
+ 1304
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1147,
+ "label": 4,
+ "text": "Title: Scaling Reinforcement Learning Algorithms by Learning Variable Temporal Resolution Models \nAbstract: The close connection between reinforcement learning (RL) algorithms and dynamic programming algorithms has fueled research on RL within the machine learning community. Yet, despite increased theoretical understanding, RL algorithms remain applicable to simple tasks only. In this paper I use the abstract framework afforded by the connection to dynamic programming to discuss the scaling issues faced by RL researchers. I focus on learning agents that have to learn to solve multiple structured RL tasks in the same environment. I propose learning abstract environment models where the abstract actions represent \"intentions\" of achieving a particular state. Such models are variable temporal resolution models because in different parts of the state space the abstract actions span different number of time steps. The operational definitions of abstract actions can be learned incrementally using repeated experience at solving RL tasks. I prove that under certain conditions solutions to new RL tasks can be found by using simu lated experience with abstract actions alone.",
+ "neighbors": [
+ 187,
+ 1128
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1148,
+ "label": 6,
+ "text": "Title: Oblivious Decision Trees, Graphs, and Top-Down Pruning \nAbstract: We describe a supervised learning algorithm, EODG, that uses mutual information to build an oblivious decision tree. The tree is then converted to an Oblivious read-Once Decision Graph (OODG) by merging nodes at the same level of the tree. For domains that are appropriate for both decision trees and OODGs, performance is approximately the same as that of C4.5, but the number of nodes in the OODG is much smaller. The merging phase that converts the oblivious decision tree to an OODG provides a new way of dealing with the replication problem and a new pruning mechanism that works top down starting from the root. The pruning mechanism is well suited for finding symmetries and aids in recovering from splits on irrelevant features that may happen during the tree construction.",
+ "neighbors": [
+ 1302
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1149,
+ "label": 2,
+ "text": "Title: \"UNIVERSAL\" CONSTRUCTION OF ARTSTEIN'S THEOREM ON NONLINEAR STABILIZATION 1 \nAbstract: Report SYCON-89-03 ABSTRACT This note presents an explicit proof of the theorem -due to Artstein- which states that the existence of a smooth control-Lyapunov function implies smooth stabilizability. More- over, the result is extended to the real-analytic and rational cases as well. The proof uses a \"universal\" formula given by an algebraic function of Lie derivatives; this formula originates in the solution of a simple Riccati equation. ",
+ "neighbors": [
+ 305,
+ 1201
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1150,
+ "label": 5,
+ "text": "Title: Stage Scheduling: A Technique to Reduce the Register Requirements of a Modulo Schedule \nAbstract: Modulo scheduling is an efficient technique for exploiting instruction level parallelism in a variety of loops, resulting in high performance code but increased register requirements. We present a set of low computational complexity stage-scheduling heuristics that reduce the register requirements of a given modulo schedule by shifting operations by multiples of II cycles. Measurements on a benchmark suite of 1289 loops from the Perfect Club, SPEC-89, and the Livermore Fortran Kernels shows that our best heuristic achieves on average 99% of the decrease in register requirements obtained by an optimal stage scheduler. ",
+ "neighbors": [
+ 1223
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1151,
+ "label": 1,
+ "text": "Title: Effects of Occam's Razor in Evolving Sigma-Pi Neural Nets \nAbstract: Several evolutionary algorithms make use of hierarchical representations of variable size rather than linear strings of fixed length. Variable complexity of the structures provides an additional representational power which may widen the application domain of evolutionary algorithms. The price for this is, however, that the search space is open-ended and solutions may grow to arbitrarily large size. In this paper we study the effects of structural complexity of the solutions on their generalization performance by analyzing the fitness landscape of sigma-pi neural networks. The analysis suggests that smaller networks achieve, on average, better generalization accuracy than larger ones, thus confirming the usefulness of Occam's razor. A simple method for implementing the Occam's razor principle is described and shown to be effective in improv ing the generalization accuracy without limiting their learning capacity.",
+ "neighbors": [
+ 91,
+ 218,
+ 543
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1152,
+ "label": 6,
+ "text": "Title: MLC Tutorial A Machine Learning library of C classes. \nAbstract: Several evolutionary algorithms make use of hierarchical representations of variable size rather than linear strings of fixed length. Variable complexity of the structures provides an additional representational power which may widen the application domain of evolutionary algorithms. The price for this is, however, that the search space is open-ended and solutions may grow to arbitrarily large size. In this paper we study the effects of structural complexity of the solutions on their generalization performance by analyzing the fitness landscape of sigma-pi neural networks. The analysis suggests that smaller networks achieve, on average, better generalization accuracy than larger ones, thus confirming the usefulness of Occam's razor. A simple method for implementing the Occam's razor principle is described and shown to be effective in improv ing the generalization accuracy without limiting their learning capacity.",
+ "neighbors": [
+ 242,
+ 1210
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1153,
+ "label": 1,
+ "text": "Title: Adaptation in constant utility non-stationary environments \nAbstract: Environments that vary over time present a fundamental problem to adaptive systems. Although in the worst case there is no hope of effective adaptation, some forms environmental variability do provide adaptive opportunities. We consider a broad class of non-stationary environments, those which combine a variable result function with an invariant utility function, and demonstrate via simulation that an adaptive strategy employing both evolution and learning can tolerate a much higher rate of environmental variation than an evolution-only strategy. We suggest that in many cases where stability has previously been assumed, the constant utility non-stationary environment may in fact be a more powerful viewpoint.",
+ "neighbors": [
+ 91,
+ 1059
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1154,
+ "label": 1,
+ "text": "Title: An Evolutionary Approach to Combinatorial Optimization Problems \nAbstract: The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems. In particular, the subset sum, maximum cut, and minimum tardy task problems are considered. Except for the fitness function, no problem-specific changes of the genetic algorithm are required in order to achieve results of high quality even for the problem instances of size 100 used in the paper. For constrained problems, such as the subset sum and the minimum tardy task, the constraints are taken into account by incorporating a graded penalty term into the fitness function. Even for large instances of these highly multimodal optimization problems, an iterated application of the genetic algorithm is observed to find the global optimum within a number of runs. As the genetic algorithm samples only a tiny fraction of the search space, these results are quite encouraging. ",
+ "neighbors": [
+ 91,
+ 731,
+ 876,
+ 1064
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1155,
+ "label": 2,
+ "text": "Title: CuPit-2: Portable and Efficient High-Level Parallel Programming of Neural Networks for the Systems Analysis Modelling\nAbstract: CuPit-2 is a special-purpose programming language designed for expressing dynamic neural network learning algorithms. It provides most of the flexibility of general-purpose languages such as C or C ++ , but is more expressive. It allows writing much clearer and more elegant programs, in particular for algorithms that change the network topology dynamically (constructive algorithms, pruning algorithms). In contrast to other languages, CuPit-2 programs can be compiled into efficient code for parallel machines without any changes in the source program, thus providing an easy start for using parallel platforms. This article analyzes the circumstances under which the CuPit-2 approach is the most useful one, presents a description of most language constructs and reports performance results for CuPit-2 on symmetric multiprocessors (SMPs). It concludes that in many cases CuPit-2 is a good basis for neural learning algorithm research on small-scale parallel machines. ",
+ "neighbors": [
+ 510,
+ 789,
+ 1237,
+ 1239
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1156,
+ "label": 1,
+ "text": "Title: University of Nevada Reno Design Strategies for Evolutionary Robotics \nAbstract: CuPit-2 is a special-purpose programming language designed for expressing dynamic neural network learning algorithms. It provides most of the flexibility of general-purpose languages such as C or C ++ , but is more expressive. It allows writing much clearer and more elegant programs, in particular for algorithms that change the network topology dynamically (constructive algorithms, pruning algorithms). In contrast to other languages, CuPit-2 programs can be compiled into efficient code for parallel machines without any changes in the source program, thus providing an easy start for using parallel platforms. This article analyzes the circumstances under which the CuPit-2 approach is the most useful one, presents a description of most language constructs and reports performance results for CuPit-2 on symmetric multiprocessors (SMPs). It concludes that in many cases CuPit-2 is a good basis for neural learning algorithm research on small-scale parallel machines. ",
+ "neighbors": [
+ 91,
+ 372,
+ 491,
+ 1143
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1157,
+ "label": 1,
+ "text": "Title: Why Ants are Hard genetic programming, simulated annealing and hill climbing performance is shown not\nAbstract: The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Analysis of shorter solutions shows they have many of the characteristics often ascribed to manually coded programs. Enumeration of a small fraction of the total search space and random sampling characterise it as rugged with many multiple plateaus split by deep valleys and many local and global optima. This suggests it is difficult for hill climbing algorithms. Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness. In some cases we show solutions cannot be assembled using a fixed representation from small building blocks of above average fitness. These suggest the Ant problem is difficult for Genetic Algorithms. Random sampling of the program search space suggests on average the density of global optima changes only slowly with program size but the density of neutral networks linking points of the same fitness grows approximately linearly with program length. This is part of the cause of bloat. ",
+ "neighbors": [
+ 1034,
+ 1145,
+ 1178
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1158,
+ "label": 2,
+ "text": "Title: ANALYSIS OF SOUND TEXTURES IN MUSICAL AND MACHINE SOUNDS BY MEANS OF HIGHER ORDER STATISTICAL FEATURES. \nAbstract: In this paper we describe a sound classification method, which seems to be applicable to a broad domain of stationary, non-musical sounds, such as machine noises and other man made non periodic sounds. The method is based on matching higher order spectra (HOS) of the acoustic signals and it generalizes our earlier results on classification of sustained musical sounds by higher order statistics. An efficient \"decorrelated matched filter\" implemetation is presented. The results show good sound classification statistics and a comparison to spectral matching methods is also discussed. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1159,
+ "label": 5,
+ "text": "Title: Generating Declarative Language Bias for Top-Down ILP Algorithms \nAbstract: Many of today's algorithms for Inductive Logic Programming (ILP) put a heavy burden and responsibility on the user, because their declarative bias have to be defined in a rather low-level fashion. To address this issue, we developed a method for generating declarative language bias for top-down ILP systems from high-level declarations. The key feature of our approach is the distinction between a user level and an expert level of language bias declarations. The expert provides abstract meta-declarations, and the user declares the relationship between the meta-level and the given database to obtain a low-level declarative language bias. The suggested languages allow for compact and abstract specifications of the declarative language bias for top-down ILP systems using schemata. We verified several properties of the translation algorithm that generates schemata, and applied it successfully to a few chemical domains. As a consequence, we propose to use a two-level approach to generate declarative language bias.",
+ "neighbors": [
+ 708,
+ 1120,
+ 1189
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1160,
+ "label": 2,
+ "text": "Title: L 0 |The First Four Years Abstract A summary of the progress and plans of\nAbstract: Most of KDD applications consider databases as static objects, and however many databases are inherently temporal, i.e., they store the evolution of each object with the passage of time. Thus, regularities about the dynamics of these databases cannot be discovered as the current state might depend in some way on the previous states. To this end, a pre-processing of data is needed aimed at extracting relationships intimately connected to the temporal nature of data that will be make available to the discovery algorithm. The predicate logic language of ILP methods together with the recent advances as to ef ficiency makes them adequate for this task.",
+ "neighbors": [
+ 1078,
+ 1091,
+ 1207
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1161,
+ "label": 1,
+ "text": "Title: The Automatic Programming of Agents that Learn Mental Models and Create Simple Plans of Action \nAbstract: An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. 1. Introduction ",
+ "neighbors": [
+ 70,
+ 168,
+ 788,
+ 1047,
+ 1165,
+ 1295,
+ 1312
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1162,
+ "label": 3,
+ "text": "Title: Reasoning about Time and Probability \nAbstract: An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. 1. Introduction ",
+ "neighbors": [
+ 328,
+ 811,
+ 850
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1163,
+ "label": 4,
+ "text": "Title: Multi-Time Models for Reinforcement Learning \nAbstract: Reinforcement learning can be used not only to predict rewards, but also to predict states, i.e. to learn a model of the world's dynamics. Models can be defined at different levels of temporal abstraction. Multi-time models are models that focus on predicting what will happen, rather than when a certain event will take place. Based on multi-time models, we can define abstract actions, which enable planning (presumably in a more efficient way) at various levels of abstraction.",
+ "neighbors": [
+ 1053,
+ 1128
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1164,
+ "label": 2,
+ "text": "Title: Design of Optimization Criteria for Multiple Sequence Alignment \nAbstract: DIMACS Technical Report 96-53 January 1997 ",
+ "neighbors": [
+ 998
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1165,
+ "label": 1,
+ "text": "Title: Simultaneous Evolution of Programs and their Control Structures Simultaneous Evolution of Programs and their Control\nAbstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k 2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method| like any form of voting or committee|can reduce the variance of the learning algorithm. Furthermore|unlike methods that simply combine multiple runs of the same learning algorithm|ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local be havior of C4.5.",
+ "neighbors": [
+ 1161
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1166,
+ "label": 2,
+ "text": "Title: The EM Algorithm for Mixtures of Factor Analyzers \nAbstract: Technical Report CRG-TR-96-1 May 21, 1996 (revised Feb 27, 1997) Abstract Factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as a reduced dimension mixture of Gaussians. We present an exact Expectation-Maximization algorithm for fitting the parameters of this mixture of factor analyzers.",
+ "neighbors": [
+ 387,
+ 1039,
+ 1061,
+ 1102,
+ 1234
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1167,
+ "label": 2,
+ "text": "Title: Modeling dynamic receptive field changes in primary visual cortex using inhibitory learning \nAbstract: The position, size, and shape of the visual receptive field (RF) of some primary visual cortical neurons change dynamically, in response to artificial scotoma conditioning in cats (Pettet & Gilbert, 1992) and to retinal lesions in cats and monkeys (Darian-Smith & Gilbert, 1995). The \"EXIN\" learning rules (Marshall, 1995) are used to model dynamic RF changes. The EXIN model is compared with an adaptation model (Xing & Gerstein, 1994) and the LISSOM model (Sirosh & Miikkulainen, 1994; Sirosh et al., 1996). To emphasize the role of the lateral inhibitory learning rules, the EXIN and the LISSOM simulations were done with only lateral inhibitory learning. During scotoma conditioning, the EXIN model without feedforward learning produces centrifugal expansion of RFs initially inside the scotoma region, accompanied by increased responsiveness, without changes in spontaneous activation. The EXIN model without feedforward learning is more consistent with the neurophysiological data than are the adaptation model and the LISSOM model. The comparison between the EXIN and the LISSOM models suggests experiments to determine the role of feedforward excitatory and lateral inhibitory learning in producing dynamic RF changes during scotoma conditioning. ",
+ "neighbors": [
+ 69,
+ 620,
+ 1099
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1168,
+ "label": 5,
+ "text": "Title: Bottom-up induction of logic programs with more than one recursive clause \nAbstract: In this paper we present a bottom-up algorithm called MRI to induce logic programs from their examples. This method can induce programs with a base clause and more than one recursive clause from a very small number of examples. MRI is based on the analysis of saturations of examples. It first generates a path structure, which is an expression of a stream of values processed by predicates. The concept of path structure was originally introduced by Identam-Almquist and used in TIM [ Idestam-Almquist, 1996 ] . In this paper, we introduce the concepts of extension and difference of path structure. Recursive clauses can be expressed as a difference between a path structure and its extension. The paper presents the algorithm and shows experimental results obtained by the method.",
+ "neighbors": [
+ 198,
+ 796
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1169,
+ "label": 2,
+ "text": "Title: An unsupervised neural network for low-level control of a wheeled mobile robot: noise resistance, stability,\nAbstract: We have recently introduced a neural network mobile robot controller (NETMORC) that autonomously learns the forward and inverse odometry of a differential drive robot through an unsupervised learning-by-doing cycle. After an initial learning phase, the controller can move the robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. In addition, the forward odometric map allows the robot to reach targets in the absence of sensory feedback. The controller is also able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we review the NETMORC architecture and describe its simplified algorithmic implementation, we present new, quantitative results on NETMORC's performance and adaptability under noise-free and noisy conditions, we compare NETMORC's performance on a trajectory-following task with the performance of an alternative controller, and we describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER. ",
+ "neighbors": [
+ 372
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1170,
+ "label": 2,
+ "text": "Title: Predicting Conditional Probability Distributions: A Connectionist Approach \nAbstract: Most traditional prediction techniques deliver the mean of the probability distribution (a single point). For multimodal processes, instead of predicting the mean of the probability distribution, it is important to predict the full distribution. This article presents a new connectionist method to predict the conditional probability distribution in response to an input. The main idea is to transform the problem from a regression to a classification problem. The conditional probability distribution network can perform both direct predictions and iterated predictions, a task which is specific for time series problems. We compare our method to fuzzy logic and discuss important differences, and also demonstrate the architecture on two time series. The first is the benchmark laser series used in the Santa Fe competition, a deterministic chaotic system. The second is a time series from a Markov process which exhibits structure on two time scales. The network produces multimodal predictions for this series. We compare the predictions of the network with a nearest-neighbor predictor and find that the conditional probability network is more than twice as likely a model.",
+ "neighbors": [
+ 343,
+ 768,
+ 1241,
+ 1242,
+ 1284
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1171,
+ "label": 2,
+ "text": "Title: Using Precepts to Augment Training Set Learning an input whose value is don't-care in some\nAbstract: are used in turn to approximate A. Empirical studies show that good results can be achieved with TSL [8, 11]. However, TSL has several drawbacks. Training set learners (e.g., backpropagation) are typically slow as they may require many passes over the training set. Also, there is no guarantee that, given an arbitrary training set, the system will find enough good critical features to get a reasonable approximation of A. Moreover, the number of features to be searched is exponential in the number of inputs, and TSL becomes computationally expensive [1]. Finally, the scarcity of interesting positive theoretical results suggests the difficulty of learning without sufficient a priori knowledge. The goal of learning systems is to generalize. Generalization is commonly based on the set of critical features the system has available. Training set learners typically extract critical features from a random set of examples. While this approach is attractive, it suffers from the exponential growth of the number of features to be searched. We propose to extend it by endowing the system with some a priori knowledge, in the form of precepts. Advantages of the augmented system are speedup, improved generalization, and greater parsimony. This paper presents a precept-driven learning algorithm. Its main features include: 1) distributed implementation, 2) bounded learning and execution times, and 3) ability to handle both correct and incorrect precepts. Results of simulations on real-world data demonstrate promise. This paper presents precept-driven learning (PDL). PDL is intended to overcome some of TSL's weaknesses. In PDL, the training set is augmented by a small set of precepts. A pair p = (i, o) in I O is called an example. A precept is an example in which some of the i-entries (inputs) are set to the special value don't-care. An input whose value is not don't-care is said to be asserted. If i has no effect on the value of the output. The use of the special value don't-care is therefore as a shorthand. A pair containing don't-care inputs represents as many examples as the product of the sizes of the input domains of its don't-care inputs. 1. Introduction ",
+ "neighbors": [
+ 481
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1172,
+ "label": 6,
+ "text": "Title: Learning to model sequences generated by switching distributions \nAbstract: We study efficient algorithms for solving the following problem, which we call the switching distributions learning problem. A sequence S = 1 2 : : : n , over a finite alphabet S is generated in the following way. The sequence is a concatenation of K runs, each of which is a consecutive subsequence. Each run is generated by independent random draws from a distribution ~p i over S, where ~p i is an element in a set of distributions f~p 1 ; : : : ; ~p N g. The learning algorithm is given this sequence and its goal is to find approximations of the distributions ~p 1 ; : : : ; ~p N , and give an approximate segmentation of the sequence into its constituting runs. We give an efficient algorithm for solving this problem and show conditions under which the algorithm is guaranteed to work with high probability.",
+ "neighbors": [
+ 1216
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1173,
+ "label": 2,
+ "text": "Title: A Connectionist Architecture with Inherent Systematicity \nAbstract: For connectionist networks to be adequate for higher level cognitive activities such as natural language interpretation, they have to generalize in a way that is appropriate given the regularities of the domain. Fodor and Pylyshyn (1988) identified an important pattern of regularities in such domains, which they called systematicity. Several attempts have been made to show that connectionist networks can generalize in accordance with these regularities, but not to the satisfaction of the critics. To address this challenge, this paper starts by establishing the implications of systematicity for connectionist solutions to the variable binding problem. Based on the work of Hadley (1994a), we argue that the network must generalize information it learns in one variable binding to other variable bindings. We then show that temporal synchrony variable binding (Shas-tri and Ajjanagadde, 1993) inherently generalizes in this way. Thereby we show that temporal synchrony variable binding is a connectionist architecture that accounts for systematicity. This is an important step in showing that connectionism can be an adequate architecture for higher level cognition. ",
+ "neighbors": [
+ 1179,
+ 1352
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1174,
+ "label": 1,
+ "text": "Title: The Impact of External Dependency in Genetic Programming Primitives \nAbstract: Both control and data dependencies among primitives impact the behavioural consistency of subprograms in genetic programming solutions. Behavioural consistency in turn impacts the ability of genetic programming to identify and promote appropriate subprograms. We present the results of modelling dependency through a parameterized problem in which a subprogram exhibits internal and external dependency levels that change as the subprogram is successively combined into larger subsolutions. We find that the key difference between non-existent and \"full\" external dependency is a longer time to solution identification and a lower likelihood of success as shown by increased difficulty in identifying and promoting correct subprograms. ",
+ "neighbors": [
+ 941,
+ 1047,
+ 1145,
+ 1182
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1175,
+ "label": 2,
+ "text": "Title: Improved Center Point Selection for Probabilistic Neural Networks \nAbstract: Probabilistic Neural Networks (PNN) typically learn more quickly than many neural network models and have had success on a variety of applications. However, in their basic form, they tend to have a large number of hidden nodes. One common solution to this problem is to keep only a randomly-selected subset of the original training data in building the network. This paper presents an algorithm called the Reduced Probabilistic Neural Network (RPNN) that seeks to choose a better-than-random subset of the available instances to use as center points of nodes in the network. The algorithm tends to retain non-noisy border points while removing nodes with instances in regions of the input space that are highly homogeneous. In experiments on 22 datasets, the RPNN had better average generalization accuracy than two other PNN models, while requiring an average of less than one-third the number of nodes. ",
+ "neighbors": [
+ 1310
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1176,
+ "label": 1,
+ "text": "Title: Real-time Interactive Neuro-evolution \nAbstract: In standard neuro-evolution, a population of networks is evolved in the task, and the network that best solves the task is found. This network is then fixed and used to solve future instances of the problem. Networks evolved in this way do not handle real-time interaction very well. It is hard to evolve a solution ahead of time that can cope effectively with all the possible environments that might arise in the future and with all the possible ways someone may interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. This approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately taking into account conflicting goals. After initial evaluation offline, the population is allowed to evolve online, and its performance improves considerably. The population not only adapts to novel situations brought about by changing strategies in the opponent and the game layout, but it also improves its performance in situations that it has already seen in offline training. This paper will describe an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone. ",
+ "neighbors": [
+ 10,
+ 140
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1177,
+ "label": 1,
+ "text": "Title: An Experimental Analysis of Schema Creation, Propagation and Disruption in Genetic Programming \nAbstract: In this paper we first review the main results in the theory of schemata in Genetic Programming (GP) and summarise a new GP schema theory which is based on a new definition of schema. Then we study the creation, propagation and disruption of this new form of schemata in real runs, for standard crossover, one-point crossover and selection only. Finally, we discuss these results in the light our GP schema theorem. ",
+ "neighbors": [
+ 91,
+ 707,
+ 1145,
+ 1178,
+ 1182
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1178,
+ "label": 1,
+ "text": "Title: Genetic Programming with One-Point Crossover and Point Mutation \nAbstract: Technical Report: CSRP-97-13 April 1997 Abstract In recent theoretical and experimental work on schemata in genetic programming we have proposed a new simpler form of crossover in which the same crossover point is selected in both parent programs. We call this operator one-point crossover because of its similarity with the corresponding operator in genetic algorithms. One point crossover presents very interesting properties from the theory point of view. In this paper we describe this form of crossover as well as a new variant called strict one-point crossover highlighting their useful theoretical and practical features. We also present experimental evidence which shows that one-point crossover compares favourably with standard crossover.",
+ "neighbors": [
+ 952,
+ 1105,
+ 1157,
+ 1177
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1179,
+ "label": 2,
+ "text": "Title: A Connectionist Architecture for Learning to Parse \nAbstract: We present a connectionist architecture and demonstrate that it can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents, and can learn generalizations over syntactic constituents, thereby addressing the sparse data problems of previous connectionist architectures. We apply these Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approaches that of statistical methods for this task. ",
+ "neighbors": [
+ 1173,
+ 1352
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1180,
+ "label": 1,
+ "text": "Title: Evolutionary Computation in Air Traffic Control Planning \nAbstract: Air Traffic Control is involved in the real-time planning of aircraft trajectories. This is a heavily constrained optimization problem. We concentrate on free-route planning, in which aircraft are not required to fly over way points. The choice of a proper representation for this real-world problem is non-trivial. We propose a two level representation: one level on which the evolutionary operators work, and a derived level on which we do calculations. Furthermore we show that a specific choice of the fitness function is important for finding good solutions to large problem instances. We use a hybrid approach in the sense that we use knowledge about air traffic control by using a number of heuristics. We have built a prototype of a planning tool, and this resulted in a flexible tool for generating a free-route planning of low cost, for a number of aircraft. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1181,
+ "label": 2,
+ "text": "Title: Using generative models for handwritten digit recognition \nAbstract: Genetic Programming is a method of program discovery consisting of a special kind of genetic algorithm capable of operating on nonlinear chromosomes (parse trees) representing programs and an interpreter which can run the programs being optimised. This paper describes PDGP (Parallel Distributed Genetic Programming), a new form of genetic programming which is suitable for the development of fine-grained parallel programs. PDGP is based on a graph-like representation for parallel programs which is manipulated by crossover and mutation operators which guarantee the syntactic correctness of the offspring. The paper describes these operators and reports some preliminary results obtained with this paradigm. ",
+ "neighbors": [
+ 274,
+ 387
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1182,
+ "label": 1,
+ "text": "Title: How Fitness Structure Affects Subsolution Acquisition in Genetic Programming \nAbstract: We define fitness structure in genetic programming to be the mapping between the subprograms of a program and their respective fitness values. This paper shows how various fitness structures of a problem with independent subsolutions relate to the acquisition of sub-solutions. The rate of subsolution acquisition is found to be directly correlated with fitness structure whether that structure is uniform, linear or exponential. An understanding of fitness structure provides partial insight into the complicated relationship between fitness function and the outcome of genetic programming's search.",
+ "neighbors": [
+ 1174,
+ 1177
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1183,
+ "label": 2,
+ "text": "Title: Rapid learning of binding-match and binding-error detector circuits via long-term potentiation \nAbstract: It is argued that the memorization of events and situations (episodic memory) requires the rapid formation of neural circuits responsive to binding errors and binding matches. While the formation of circuits responsive to binding matches can be modeled by associative learning mechanisms, the rapid formation of circuits responsive to binding errors is difficult to explain given their seemingly paradoxical behavior; such a circuit must be formed in response to the occurrence of a binding (i.e., a particular pattern in the input), but subsequent to its formation, it must not fire anymore in response to the occurrence of the very binding (i.e., pattern) that led to its formation. A plausible account of the formation of such circuits has not been offered. A computational model is described that demonstrates how a transient pattern of activity representing an event can lead to the rapid formation of circuits for detecting bindings and binding errors as a result of long-term potentiation within structures whose architecture and circuitry are similar to those of the hippocampal formation, a neural structure known to be critical to episodic memory. The model exhibits a high memory capacity and is robust against limited amounts of diffuse cell loss. The model also offers an alternate interpretation of the functional role of region CA3 in the formation of episodic memories, and predicts the nature of memory impairment that would result from damage to various regions of the hippocampal formation. ",
+ "neighbors": [
+ 662,
+ 1012
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1184,
+ "label": 6,
+ "text": "Title: Learning Harmonic Progression Using Markov Models EECS545 Project \nAbstract: It is argued that the memorization of events and situations (episodic memory) requires the rapid formation of neural circuits responsive to binding errors and binding matches. While the formation of circuits responsive to binding matches can be modeled by associative learning mechanisms, the rapid formation of circuits responsive to binding errors is difficult to explain given their seemingly paradoxical behavior; such a circuit must be formed in response to the occurrence of a binding (i.e., a particular pattern in the input), but subsequent to its formation, it must not fire anymore in response to the occurrence of the very binding (i.e., pattern) that led to its formation. A plausible account of the formation of such circuits has not been offered. A computational model is described that demonstrates how a transient pattern of activity representing an event can lead to the rapid formation of circuits for detecting bindings and binding errors as a result of long-term potentiation within structures whose architecture and circuitry are similar to those of the hippocampal formation, a neural structure known to be critical to episodic memory. The model exhibits a high memory capacity and is robust against limited amounts of diffuse cell loss. The model also offers an alternate interpretation of the functional role of region CA3 in the formation of episodic memories, and predicts the nature of memory impairment that would result from damage to various regions of the hippocampal formation. ",
+ "neighbors": [
+ 1220
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1185,
+ "label": 1,
+ "text": "Title: Discovery of Symbolic, Neuro-Symbolic and Neural Networks with Parallel Distributed Genetic Programming \nAbstract: Technical Report: CSRP-96-14 August 1996 Abstract Genetic Programming is a method of program discovery consisting of a special kind of genetic algorithm capable of operating on parse trees representing programs and an interpreter which can run the programs being optimised. This paper describes Parallel Distributed Genetic Programming (PDGP), a new form of genetic programming which is suitable for the development of parallel programs in which symbolic and neural processing elements can be combined a in free and natural way. PDGP is based on a graph-like representation for parallel programs which is manipulated by crossover and mutation operators which guarantee the syntactic correctness of the offspring. The paper describes these operators and reports some results obtained with the exclusive-or problem. ",
+ "neighbors": [
+ 717,
+ 1043,
+ 1325
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1186,
+ "label": 2,
+ "text": "Title: Routing in Optical Multistage Interconnection Networks: a Neural Network Solution \nAbstract: There has been much interest in using optics to implement computer interconnection networks. However, there has been little discussion of any routing methodologies besides those already used in electronics. In this paper, a neural network routing methodology is proposed that can generate control bits for a broad range of optical multistage interconnection networks (OMINs). Though we present no optical implementation of this methodology, we illustrate its control for an optical interconnection network. These OMINs can be used as communication media for distributed computing systems. The routing methodology makes use of an Artificial Neural Network (ANN) that functions as a parallel computer for generating the routes. The neural network routing scheme can be applied to electrical as well as optical interconnection networks. However, since the ANN can be implemented using optics, this routing approach is especially appealing for an optical computing environment. Although the ANN does not always generate the best solution, the parallel nature of the ANN computation may make this routing scheme faster than conventional routing approaches, especially for OMINs that have an irregular structure. Furthermore, the ANN router is fault-tolerant. Results are shown for generating routes in a 16 fi 16, 3-stage OMIN.",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1187,
+ "label": 2,
+ "text": "Title: GENE REGULATION AND BIOLOGICAL DEVELOPMENT IN NEURAL NETWORKS: AN EXPLORATORY MODEL \nAbstract: In this paper we explore the distributed database allocation problem, which is intractable. We also discuss genetic algorithms and how they have been used successfully to solve combinatorial problems. Our experimental results show the GA to be far superior to the greedy heuristic in obtaining optimal and near optimal fragment placements for the allocation problem with various data sets.",
+ "neighbors": [
+ 1249
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1188,
+ "label": 0,
+ "text": "Title: An Interactive Planning Architecture The Forest Fire Fighting case \nAbstract: This paper describes an interactive planning system that was developed inside an Intelligent Decision Support System aimed at supporting an operator when planning the initial attack to forest fires. The planning architecture rests on the integration of case-based reasoning techniques with constraint reasoning techniques exploited, mainly, for performing temporal reasoning on temporal metric information. Temporal reasoning plays a central role in supporting interactive functions that are provided to the user when performing two basic steps of the planning process: plan adaptation and resource scheduling. A first prototype was integrated with a situation assessment and a resource allocation manager subsystem and is currently being tested.",
+ "neighbors": [
+ 989
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1189,
+ "label": 5,
+ "text": "Title: A Comparison of Pruning Methods for Relational Concept Learning \nAbstract: Pre-Pruning and Post-Pruning are two standard methods of dealing with noise in concept learning. Pre-Pruning methods are very efficient, while Post-Pruning methods typically are more accurate, but much slower, because they have to generate an overly specific concept description first. We have experimented with a variety of pruning methods, including two new methods that try to combine and integrate pre- and post-pruning in order to achieve both accuracy and efficiency. This is verified with test series in a chess position classification task.",
+ "neighbors": [
+ 198,
+ 217,
+ 342,
+ 716,
+ 1159,
+ 1190
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1190,
+ "label": 5,
+ "text": "Title: Top-Down Pruning in Relational Learning \nAbstract: Pruning is an effective method for dealing with noise in Machine Learning. Recently pruning algorithms, in particular Reduced Error Pruning, have also attracted interest in the field of Inductive Logic Programming. However, it has been shown that these methods can be very inefficient, because most of the time is wasted for generating clauses that explain noisy examples and subsequently pruning these clauses. We introduce a new method which searches for good theories in a top-down fashion to get a better starting point for the pruning algorithm. Experiments show that this approach can significantly lower the complexity of the task without losing predictive accuracy. ",
+ "neighbors": [
+ 198,
+ 217,
+ 342,
+ 716,
+ 1189
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1191,
+ "label": 3,
+ "text": "Title: Logarithmic-Time Updates and Queries in Probabilistic Networks \nAbstract: Traditional databases commonly support efficient query and update procedures that operate in time which is sublinear in the size of the database. Our goal in this paper is to take a first step toward dynamic reasoning in probabilistic databases with comparable efficiency. We propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks. In the conventional algorithm, new evidence is absorbed in time O(1) and queries are processed in time O(N ), where N is the size of the network. We propose an algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N ) at the expense of O(log N ) time per evidence absorption. The usefulness of sub-linear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases. We briefly discuss a potential application of dynamic probabilistic reasoning in computational biology.",
+ "neighbors": [
+ 630,
+ 1123
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1192,
+ "label": 3,
+ "text": "Title: Localized Partial Evaluation of Belief Networks \nAbstract: in the network. Often, however, an application will not need information about every node in the network nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are \"too far away\" from the queried node to have much impact on its value. LPE has the \"anytime\" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network.",
+ "neighbors": [
+ 1046
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1193,
+ "label": 0,
+ "text": "Title: Cooperative Bayesian and Case-Based Reasoning for Solving Multiagent Planning Tasks \nAbstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.",
+ "neighbors": [
+ 37,
+ 378,
+ 1230
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1194,
+ "label": 1,
+ "text": "Title: Diplomarbeit A Genetic Algorithm for the Topological Optimization of Neural Networks \nAbstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.",
+ "neighbors": [
+ 91,
+ 240,
+ 510,
+ 1338
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1195,
+ "label": 1,
+ "text": "Title: TECHNIQUES FOR REDUCING THE DISRUPTION OF SUPERIOR BUILDING BLOCKS IN GENETIC ALGORITHMS \nAbstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.",
+ "neighbors": [
+ 78,
+ 91
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1196,
+ "label": 0,
+ "text": "Title: Case Retrieval Nets Applied to Large Case Bases \nAbstract: This article presents some experimental results obtained from applying the Case Retrieval Net approach to large case bases. The obtained results suggest that CRNs can successfully handle case bases larger than considered in other reports.",
+ "neighbors": [
+ 41,
+ 1004,
+ 1005
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1197,
+ "label": 1,
+ "text": "Title: Genes, Phenes and the Baldwin Effect: Learning and Evolution in a Simulated Population \nAbstract: The Baldwin Effect, first proposed in the late nineteenth century, suggests that the course of evolutionary change can be influenced by individually learned behavior. The existence of this effect is still a hotly debated topic. In this paper clear evidence is presented that learning-based plasticity at the phenotypic level can and does produce directed changes at the genotypic level. This research confirms earlier experimental work done by others, notably Hinton & Nowlan (1987). Further, the amount of plasticity of the learned behavior is shown to be crucial to the size of the Baldwin Effect: either too little or too much and the effect disappears or is significantly reduced. Finally, for learnable traits, the case is made that over many generations it will become easier for the population as a whole to learn these traits (i.e. the phenotypic plasticity of these traits will increase). In this gradual transition from a genetically driven population to one driven by learning, the importance of the Baldwin Effect decreases. ",
+ "neighbors": [
+ 229,
+ 760
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1198,
+ "label": 0,
+ "text": "Title: Case-based reactive navigation: A case-based method for on-line selection and adaptation of reactive control parameters\nAbstract: This article presents a new line of research investigating on-line learning mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high-level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (A Case-BAsed Reactive Robotic) system, and evaluated through empirical simulation of the system on several different environments, including \"box canyon\" environments known to be problematic for reactive control systems in general. fl Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology, Atlanta, Geor gia, 1992. ",
+ "neighbors": [
+ 500,
+ 1088
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1199,
+ "label": 5,
+ "text": "Title: Theory-Guided Induction of Logic Programs by Inference of Regular Languages recursive clauses. merlin on the\nAbstract: resent allowed sequences of resolution steps for the initial theory. There are, however, many characterizations of allowed sequences of resolution steps that cannot be expressed by a set of resolvents. One approach to this problem is presented, the system mer-lin, which is based on an earlier technique for learning finite-state automata that represent allowed sequences of resolution steps. merlin extends the previous technique in three ways: i) negative examples are considered in addition to positive examples, ii) a new strategy for performing generalization is used, and iii) a technique for converting the learned automaton to a logic program is included. Results from experiments are presented in which merlin outperforms both a system using the old strategy for performing generalization, and a traditional covering technique. The latter result can be explained by the limited expressiveness of hypotheses produced by covering and also by the fact that covering needs to produce the correct base clauses for a recursive definition before ",
+ "neighbors": [
+ 299,
+ 616,
+ 708
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1200,
+ "label": 3,
+ "text": "Title: EXACT TRANSITION PROBABILITIES FOR THE INDEPENDENCE METROPOLIS SAMPLER \nAbstract: A recent result of Jun Liu's has shown how to compute explicitly the eigen-values and eigenvectors for the Markov chain derived from a special case of the Hastings sampling algorithm, known as the indepdendence Metropolis sampler. In this note, we show how to extend the result to obtain exact n-step transition probabilities for any n. This is done first for a chain on a finite state space, and then extended to a general (discrete or continuous) state space. The paper concludes with some implications for diagnostic tests of convergence of Markov chain samplers. ",
+ "neighbors": [
+ 281,
+ 1063,
+ 1130
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1201,
+ "label": 2,
+ "text": "Title: Asymptotic Controllability Implies Feedback Stabilization \nAbstract: | ",
+ "neighbors": [
+ 1149
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1202,
+ "label": 2,
+ "text": "Title: PHONETIC CLASSIFICATION OF TIMIT SEGMENTS PREPROCESSED WITH LYON'S COCHLEAR MODEL USING A SUPERVISED/UNSUPERVISED HYBRID NEURAL NETWORK \nAbstract: We report results on vowel and stop consonant recognition with tokens extracted from the TIMIT database. Our current system differs from others doing similar tasks in that we do not use any specific time normalization techniques. We use a very detailed biologically motivated input representation of the speech tokens - Lyon's cochlear model as implemented by Slaney [20]. This detailed, high dimensional representation, known as a cochleagram, is classified by either a back-propagation or by a hybrid supervised/unsupervised neural network classifier. The hybrid network is composed of a biologically motivated unsupervised network and a supervised back-propagation network. This approach produces results comparable to those obtained by others without the addition of time normalization. ",
+ "neighbors": [
+ 207,
+ 1275,
+ 1276,
+ 1277
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1203,
+ "label": 6,
+ "text": "Title: A Comparison of New and Old Algorithms for A Mixture Estimation Problem \nAbstract: We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient projection and EM. In this framework, the distance between the new and old proportion vectors is used as a penalty term. The square distance leads to the gradient projection update, and the relative entropy to a new update which we call the exponentiated gradient update (EG ). Curiously, when a second order Taylor expansion of the relative entropy is used, we arrive at an update EM which, for = 1, gives the usual EM update. Experimentally, both the EM -update and the EG -update for > 1 outperform the EM algorithm and its variants. We also prove a polynomial bound on the rate of convergence of the EG algorithm. ",
+ "neighbors": [
+ 42,
+ 1076,
+ 1087
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1204,
+ "label": 6,
+ "text": "Title: Programming Research Group A LEARNABILITY MODEL FOR UNIVERSAL REPRESENTATIONS \nAbstract: This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals. ",
+ "neighbors": [
+ 392,
+ 724,
+ 796,
+ 1036
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1205,
+ "label": 1,
+ "text": "Title: A comparison of the fixed and floating building block representation in the genetic algorithm \nAbstract: This article compares the traditional, fixed problem representation style of a genetic algorithm (GA) with a new floating representation in which the building blocks of a problem are not fixed at specific locations on the individuals of the population. In addition, the effects of non-coding segments on both of these representations is studied. Non-coding segments are a computational model of non-coding DNA and floating building blocks mimic the location independence of genes. The fact that these structures are prevalent in natural genetic systems suggests that they may provide some advantages to the evolutionary process. Our results show that there is a significant difference in how GAs solve a problem in the fixed and floating representations. GAs are able to maintain a more diverse population with the floating representation. The combination of non-coding segments and floating building blocks appears to encourage a GA to take advantage of its parallel search and recombination abilities. ",
+ "neighbors": [
+ 910,
+ 941,
+ 1311,
+ 1314
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1206,
+ "label": 1,
+ "text": "Title: New Methods for Competitive Coevolution \nAbstract: We consider \"competitive coevolution,\" in which fitness is based on direct competition among individuals selected from two independently evolving populations of \"hosts\" and \"parasites.\" Competitive coevolution can lead to an \"arms race,\" in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. \"Competitive fitness sharing\" changes the way fitness is measured, \"shared sampling\" provides a method for selecting a strong, diverse set of parasites, and the \"hall of fame\" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift. ",
+ "neighbors": [
+ 119,
+ 351,
+ 887,
+ 981,
+ 999,
+ 1000
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1207,
+ "label": 2,
+ "text": "Title: Learning to segment images using dynamic feature binding an isolated object in an image is\nAbstract: Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units. ",
+ "neighbors": [
+ 1160,
+ 1316
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1208,
+ "label": 0,
+ "text": "Title: Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier \nAbstract: The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence, and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial degrees of attribute dependence. ",
+ "neighbors": [
+ 585,
+ 1067,
+ 1342
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1209,
+ "label": 3,
+ "text": "Title: Dynamic Belief Networks for Discrete Monitoring \nAbstract: We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the Data Association Problem.",
+ "neighbors": [
+ 322,
+ 364,
+ 458,
+ 660,
+ 1246
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1210,
+ "label": 6,
+ "text": "Title: The Power of Decision Tables \nAbstract: We evaluate the power of decision tables as a hypothesis space for supervised learning algorithms. Decision tables are one of the simplest hypothesis spaces possible, and usually they are easy to understand. Experimental results show that on artificial and real-world domains containing only discrete features, IDTM, an algorithm inducing decision tables, can sometimes outperform state-of-the-art algorithms such as C4.5. Surprisingly, performance is quite good on some datasets with continuous features, indicating that many datasets used in machine learning either do not require these features, or that these features have few values. We also describe an incremental method for performing cross-validation that is applicable to incremental learning algorithms including IDTM. Using incremental cross-validation, it is possible to cross-validate a given dataset and IDTM in time that is linear in the number of instances, the number of features, and the number of label values. The time for incremental cross-validation is independent of the number of folds chosen, hence leave-one-out cross-validation and ten-fold cross-validation take the same time. ",
+ "neighbors": [
+ 284,
+ 712,
+ 1129,
+ 1152,
+ 1302
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1211,
+ "label": 3,
+ "text": "Title: Feature Subset Selection Using the Wrapper Method: Overfitting and Dynamic Search Space Topology \nAbstract: In the wrapper approach to feature subset selection, a search for an optimal set of features is made using the induction algorithm as a black box. The estimated future performance of the algorithm is the heuristic guiding the search. Statistical methods for feature subset selection including forward selection, backward elimination, and their stepwise variants can be viewed as simple hill-climbing techniques in the space of feature subsets. We utilize best-first search to find a good feature subset and discuss overfitting problems that may be associated with searching too many feature subsets. We introduce compound operators that dynamically change the topology of the search space to better utilize the information available from the evaluation of feature subsets. We show that compound operators unify previous approaches that deal with relevant and irrelevant features. The improved feature subset selection yields significant improvements for real-world datasets when using the ID3 and the Naive-Bayes induction algorithms. ",
+ "neighbors": [
+ 118,
+ 242,
+ 749,
+ 905
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1212,
+ "label": 3,
+ "text": "Title: Sequential Importance Sampling for Nonparametric Bayes Models: The Next Generation Running Title: SIS for Nonparametric Bayes \nAbstract: There are two generations of Gibbs sampling methods for semi-parametric models involving the Dirichlet process. The first generation suffered from a severe drawback; namely that the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators. Steve MacEachern is Associate Professor, Department of Statistics, Ohio State University, Merlise Clyde is Assistant Professor, Institute of Statistics and Decision Sciences, Duke University, and Jun Liu is Assistant Professor, Department of Statistics, Stanford University. The work of the second author was supported in part by the National Science Foundation grants DMS-9305699 and DMS-9626135, and that of the last author by the National Science Foundation grants DMS-9406044, DMS-9501570, and the Terman Fellowship. ",
+ "neighbors": [
+ 977
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1213,
+ "label": 2,
+ "text": "Title: No Free Lunch for Early Stopping \nAbstract: We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with certain non-uniform prior on the early stopping solutions.",
+ "neighbors": [
+ 1042
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1214,
+ "label": 6,
+ "text": "Title: Exact Learning of -DNF Formulas with Malicious Membership Queries \nAbstract: We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with certain non-uniform prior on the early stopping solutions.",
+ "neighbors": [
+ 1141
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1215,
+ "label": 6,
+ "text": "Title: The Power of Team Exploration: Two Robots Can Learn Unlabeled Directed Graphs \nAbstract: We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots, which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk on a graph converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs. ",
+ "neighbors": [
+ 255,
+ 1220,
+ 1256
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1216,
+ "label": 6,
+ "text": "Title: Learning With Unreliable Boundary Queries \nAbstract: We introduce a model for learning from examples and membership queries in situations where the boundary between positive and negative examples is somewhat ill-defined. In our model, queries near the boundary of a target concept may receive incorrect or \"don't care\" responses, and the distribution of examples has zero probability mass on the boundary region. The motivation behind our model is that in many cases the boundary between positive and negative examples is complicated or \"fuzzy.\" However, one may still hope to learn successfully, because the typical examples that one sees do not come from that region. We present several positive results in this new model. We show how to learn the intersection of two arbitrary halfspaces when membership queries near the boundary may be answered incorrectly. Our algorithm is an extension of an algorithm of Baum [7, 6] that learns the intersection of two halfspaces whose bounding planes pass through the origin in the PAC-with-membership-queries model. We also describe algorithms for learning several subclasses of monotone DNF formulas.",
+ "neighbors": [
+ 626,
+ 1172
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1217,
+ "label": 2,
+ "text": "Keyword: Running Title: Local Multivariate Binary Processors \nAbstract: We thank Sue Becker, Peter Hancock and Darragh Smyth for helpful comments on this work. The work of Dario Floreano and Bill Phillips was supported by a Network Grant from the Human Capital and Mobility Programme of the European Community. ",
+ "neighbors": [
+ 924,
+ 1276
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1218,
+ "label": 2,
+ "text": "Title: Physiological Gain Leads to High ISI Variability in a Simple Model of a Cortical Regular\nAbstract: To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky and Koch, 1993), it is critical to examine the dynamics of their neuronal integration as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick et al., 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency vs. injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory \"1= p N arguments hold and spiking is regular; after the \"memory\" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state, and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady state behavior is predominant and ISI's are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs. ",
+ "neighbors": [
+ 1278
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1219,
+ "label": 0,
+ "text": "Title: Computer-Supported Argumentation for Cooperative Design on the World-Wide Web \nAbstract: This paper describes an argumentation system for cooperative design applications on the Web. The system provides experts involved in such procedures means of expressing and weighing their individual arguments and preferences, in order to argue for or against the selection of a certain choice. It supports defeasible and qualitative reasoning in the presence of ill-structured information. Argumentation is performed through a set of discourse acts which call a variety of procedures for the propagation of information in the corresponding discussion graph. The paper also reports on the integration of Case Based Reasoning techniques, used to resolve current design issues by considering previous similar situations, and the specitcation of similarity measures between the various argumentation items, the aim being to estimate the variations among opinions of the designers involved in cooperative design. ",
+ "neighbors": [
+ 37
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1220,
+ "label": 6,
+ "text": "Title: Efficient Learning of Typical Finite Automata from Random Walks (Extended Abstract) \nAbstract: This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the \"typical\" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in presenting the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings. ",
+ "neighbors": [
+ 333,
+ 392,
+ 780,
+ 1073,
+ 1184,
+ 1215
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1221,
+ "label": 1,
+ "text": "Title: Program Search with a Hierarchical Variable Length Representation: Genetic Programming, Simulated Annealing and Hill Climbing \nAbstract: This paper presents a comparison of Genetic Programming(GP) with Simulated Annealing (SA) and Stochastic Iterated Hill Climbing (SIHC) based on a suite of program discovery problems which have been previously tackled only with GP. All three search algorithms employ the hierarchical variable length representation for programs brought into recent prominence with the GP paradigm [8]. We feel it is not intuitively obvious that mutation-based adaptive search can handle program discovery yet, to date, for each GP problem we have tried, SA or SIHC also work.",
+ "neighbors": [
+ 91,
+ 1145
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1222,
+ "label": 1,
+ "text": "Title: Modeling the Evolution of Motivation \nAbstract: In order for learning to improve the adaptiveness of an animal's behavior and thus direct evolution in the way Baldwin suggested, the learning mechanism must incorporate an innate evaluation of how the animal's actions influence its reproductive fitness. For example, many circumstances that damage an animal, or otherwise reduce its fitness are painful and tend to be avoided. We refer to the mechanism by which an animal evaluates the fitness consequences of its actions as a \"motivation system,\" and argue that such a system must evolve along with the behaviors it evaluates. We describe simulations of the evolution of populations of agents instantiating a number of different architectures for generating action and learning, in worlds of differing complexity. We find that in some cases, members of the populations evolve motivation systems that are accurate enough to direct learning so as to increase the fitness of the actions the agents perform. Furthermore, the motivation systems tend to incorporate systematic distortions in their representations of the worlds they inhabit; these distortions can increase the adaptiveness of the behavior generated. ",
+ "neighbors": [
+ 70,
+ 91,
+ 952,
+ 1059,
+ 1138
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1223,
+ "label": 5,
+ "text": "Title: A Reduced Multipipeline Machine Description that Preserves Scheduling Constraints \nAbstract: High performance compilers increasingly rely on accurate modeling of the machine resources to efficiently exploit the instruction level parallelism of an application. In this paper, we propose a reduced machine description that results in faster detection of resource contentions while preserving the scheduling constraints present in the original machine description. The proposed approach reduces a machine description in an automated, error-free, and efficient fashion. Moreover, it fully supports schedulers that backtrack and process operations in arbitrary order. Reduced descriptions for the DEC Alpha 21064, MIPS R3000/R3010, and Cydra 5 result in 4 to 7 times faster detection of resource contentions and require 22 to 90% of the memory storage used by the original machine descriptions. ",
+ "neighbors": [
+ 1150
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1224,
+ "label": 0,
+ "text": "Title: Case-Based Sonogram Classification \nAbstract: This report replicates and extends results reported by Naval Air Warfare Center (NAWC) personnel on the automatic classification of sonar images. They used novel case-based reasoning systems in their empirical studies, but did not obtain comparative analyses using standard classification algorithms. Therefore, the quality of the NAWC results were unknown. We replicated the NAWC studies and also tested several other classifiers (i.e., both case-based and otherwise) from the machine learning literature. These comparisons and their ramifications are detailed in this paper. Next, we investigated Fala and Walker's two suggestions for future work (i.e., on combining their similarity functions and on an alternative case representation). Finally, we describe several ways to incorporate additional domain-specific knowledge when applying case-based classifiers to similar tasks. ",
+ "neighbors": [
+ 148,
+ 239
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1225,
+ "label": 0,
+ "text": "Title: Goal-Driven Learning: Fundamental Issues (A Symposium Report) \nAbstract: In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research di rections in goal-driven learning. fl Appears in AI Magazine, 14(4):67-72, 1993",
+ "neighbors": [
+ 639,
+ 1238,
+ 1270
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1226,
+ "label": 2,
+ "text": "Title: Evaluating Neural Network Predictors by Bootstrapping \nAbstract: We present a new method, inspired by the bootstrap, whose goal it is to determine the quality and reliability of a neural network predictor. Our method leads to more robust forecasting along with a large amount of statistical information on forecast performance that we exploit. We exhibit the method in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. It turns out that the variation due to different resamplings (i.e., splits between training, cross-validation, and test sets) is significantly larger than the variation due to different network conditions (such as architecture and initial weights). Furthermore, this method allows us to forecast a probability distribution, as opposed to the traditional case of just a single value at each time step. We demonstrate this on a strictly held-out test set that includes the 1987 stock market crash. We also compare the performance of the class of neural networks to identically bootstrapped linear models.",
+ "neighbors": [
+ 534,
+ 768,
+ 1227,
+ 1241,
+ 1242
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1227,
+ "label": 2,
+ "text": "Title: Predictions with Confidence Intervals (Local Error Bars) \nAbstract: We present a new method for obtaining local error bars, i.e., estimates of the confidence in the predicted value that depend on the input. We approach this problem of nonlinear regression in a maximum likelihood framework. We demonstrate our technique first on computer generated data with locally varying, normally distributed target noise. We then apply it to the laser data from the Santa Fe Time Series Competition. Finally, we extend the technique to estimate error bars for iterated predictions, and apply it to the exact competition task where it gives the best performance to date.",
+ "neighbors": [
+ 534,
+ 768,
+ 1226,
+ 1284
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1228,
+ "label": 3,
+ "text": "Title: Adaptive proposal distribution for random walk Metropolis algorithm \nAbstract: The choice of a suitable MCMC method and further the choice of a proposal distribution is known to be crucial for the convergence of the Markov chain. However, in many cases the choice of an effective proposal distribution is difficult. As a remedy we suggest a method called Adaptive Proposal (AP). Although the stationary distribution of the AP algorithm is slightly biased, it appears to provide an efficient tool for, e.g., reasonably low dimensional problems, as typically encountered in non-linear regression problems in natural sciences. As a realistic example we include a successful application of the AP algorithm in parameter estimation for the satellite instrument 'GO-MOS'. In this paper we also present a comprehensive test procedure and systematic performance criteria for comparing Adaptive Proposal algorithm with more traditional Metropolis algorithms. ",
+ "neighbors": [
+ 266,
+ 281,
+ 1080
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1229,
+ "label": 2,
+ "text": "Title: Priors, Stabilizers and Basis Functions: from regularization to radial, tensor and additive splines \nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular we had discussed how standard smoothness functionals lead to a subclass of regularization networks, the well-known Radial Basis Functions approximation schemes. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same extension that leads from Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions and some forms of Projection Pursuit Regression. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we show the relation between activation functions of the Gaussian and sigmoidal type by considering the simple case of the kernel G(x) = jxj. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that generalize into Hyper Basis Functions, b) some tensor product splines, and c) additive splines that generalize into schemes of the type of ridge approximation, hinge functions and one-hidden-layer perceptrons. This paper describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041 (which includes funds from DARPA provided under the HPCC program); and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at the Whitaker College, Massachusetts Institute of Technology. c fl Massachusetts Institute of Technology, 1993",
+ "neighbors": [
+ 357,
+ 929
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1230,
+ "label": 3,
+ "text": "Title: Massively Parallel Case-Based Reasoning with Probabilistic Similarity Metrics \nAbstract: We propose a probabilistic case-space metric for the case matching and case adaptation tasks. Central to our approach is a probability propagation algorithm adopted from Bayesian reasoning systems, which allows our case-based reasoning system to perform theoretically sound probabilistic reasoning. The same probability propagation mechanism actually offers a uniform solution to both the case matching and case adaptation problems. We also show how the algorithm can be implemented as a connectionist network, where efficient massively parallel case retrieval is an inherent property of the system. We argue that using this kind of an approach, the difficult problem of case indexing can be completely avoided. Pp. 144-154 in Topics in Case-Based Reasoning, edited by Stefan Wess, Klaus-Dieter Althoff and Michael M. Richter. Volume 837, Lecture ",
+ "neighbors": [
+ 166,
+ 1193
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1231,
+ "label": 2,
+ "text": "Title: Evolving Artificial Neural Networks using the Baldwin Effect \nAbstract: This paper describes how through simple means a genetic search towards optimal neural network architectures can be improved, both in the convergence speed as in the quality of the final result. This result can be theoretically explained with the Baldwin effect, which is implemented here not just by the learning process of the network alone, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, both help ing and improving on simple genetic search.",
+ "neighbors": [
+ 399,
+ 898,
+ 1338
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1232,
+ "label": 2,
+ "text": "Title: Optimising Local Hebbian Learning: use the ffi-rule \nAbstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. fl Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1233,
+ "label": 3,
+ "text": "Title: On Computing the Largest Fraction of Missing Information for the EM Algorithm and the Worst\nAbstract: We address the problem of computing the largest fraction of missing information for the EM algorithm and the worst linear function for data augmentation. These are the largest eigenvalue and its associated eigenvector for the Jacobian of the EM operator at a maximum likelihood estimate, which are important for assessing convergence in iterative simulation. An estimate of the largest fraction of missing information is available from the EM iterates; this is often adequate since only a few figures of accuracy are needed. In some instances the EM iteration also gives an estimate of the worst linear function. We show that the power method for eigencomputation can be used to compute efficient and accurate estimates of both quantities. Unlike eigenvalue decomposition, the power method computes only the largest eigenvalue and eigenvector of a matrix, it can take advantage of a good eigenvector estimate as an initial value and it can be terminated after only a few figures of accuracy are obtained. Moreover, the matrix products needed in the power method can be computed by extrapolation, obviating the need to form the Jacobian of the EM operator. We give results of simultation studies on multivariate normal data showing this approach becomes more efficient as the data dimension increases than methods that use a finite-difference approximation to the Jacobian, which is the only general-purpose alternative available. fl Funded by National Institutes of Health Small Business Innovation Reseach Grant 5R44CA65147-03, and by Office of Naval Research contracts N00014-96-1-0192 and N00014-96-1-0330. We are indebted to Tim Hesterberg, Jim Schimert, Doug Clarkson, Anne Greenbaum, and Adrian Raftery for comments and discussion that helped advance this research and improve this paper. ",
+ "neighbors": [
+ 1244
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1234,
+ "label": 2,
+ "text": "Title: A HIERARCHICAL COMMUNITY OF EXPERTS \nAbstract: We describe a directed acyclic graphical model that contains a hierarchy of linear units and a mechanism for dynamically selecting an appropriate subset of these units to model each observation. The non-linear selection mechanism is a hierarchy of binary units each of which gates the output of one of the linear units. There are no connections from linear units to binary units, so the generative model can be viewed as a logistic belief net (Neal 1992) which selects a skeleton linear model from among the available linear units. We show that Gibbs sampling can be used to learn the parameters of the linear and binary units even when the sampling is so brief that the Markov chain is far from equilibrium. ",
+ "neighbors": [
+ 19,
+ 40,
+ 42,
+ 1166
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1235,
+ "label": 2,
+ "text": "Title: Coordination and Control Structures and Processes: Possibilities for Connectionist Networks (CN) \nAbstract: The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs along with all other types of programs are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost. ",
+ "neighbors": [
+ 283,
+ 386,
+ 991,
+ 1029,
+ 1051,
+ 1083
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1236,
+ "label": 1,
+ "text": "Title: Properties of Genetic Representations of Neural Architectures \nAbstract: Genetic algorithms and related evolutionary techniques offer a promising approach for automatically exploring the design space of neural architectures for artificial intelligence and cognitive modeling. Central to this process of evolutionary design of neural architectures (EDNA) is the choice of the representation scheme that is used to encode a neural architecture in the form of a gene string (genotype) and to decode a genotype into the corresponding neural architecture (phenotype). The representation scheme used not only constrains the class of neural architectures that are representable (evolvable) in the system, but also determines the efficiency and the time-space complexity of the evolutionary design procedure as a whole. This paper identifies and discusses a set of properties that can be used to characterize different representations used in EDNA and to design or select representations with the necessary properties for particular classes of applications.",
+ "neighbors": [
+ 91,
+ 288,
+ 885,
+ 1051,
+ 1295
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1237,
+ "label": 2,
+ "text": "Title: CuPit-2 A Parallel Language for Neural Algorithms: Language Reference and Tutorial \nAbstract: and load balancing even for irregular neural networks. The idea to achieve these goals lies in the programming model: CuPit-2 programs are object-centered, with connections and nodes of a graph (which is the neural network) being the objects. Algorithms are based on parallel local computations in the nodes and connections and communication along the connections (plus broadcast and reduction operations). This report describes the design considerations and the resulting language definition and discusses in detail a tutorial example program. This CuPit-2 language manual and tutorial is an updated version of the original CuPit language manual [Pre94]. The new language CuPit-2 differs from the original CuPit in several ways. All language changes from CuPit to CuPit-2 are listed in the appendix. ",
+ "neighbors": [
+ 1155,
+ 1239
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1238,
+ "label": 0,
+ "text": "Title: Issues in Goal-Driven Explanation \nAbstract: When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy|generally backwards chaining|to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation. ",
+ "neighbors": [
+ 340,
+ 834,
+ 1225
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1239,
+ "label": 2,
+ "text": "Title: A Parallel Programming Model for Irregular Dynamic Neural Networks a programming model that allows to\nAbstract: A compiler for CuPit has been built for the MasPar MP-1/MP-2 using compilation techniques that can also be applied to most other parallel machines. The paper shortly presents the main ideas of the techniques used and results obtained by the various optimizations. ",
+ "neighbors": [
+ 510,
+ 635,
+ 1155,
+ 1237
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1240,
+ "label": 2,
+ "text": "Title: Subsymbolic Case-Role Analysis of Sentences with Embedded Clauses \nAbstract: A distributed neural network model called SPEC for processing sentences with recursive relative clauses is described. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules. The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. SPEC exhibits plausible memory degradation as the depth of the center embeddings increases, its memory is primed by earlier constituents, and its performance is aided by semantic constraints between the constituents. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system. This way, in contrast to earlier subsymbolic systems, parsing is modeled as a controlled high-level process rather than one based on automatic reflex responses. ",
+ "neighbors": [
+ 114,
+ 1091
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1241,
+ "label": 2,
+ "text": "Title: On-Line Adaptation of a Signal Predistorter through Dual Reinforcement Learning \nAbstract: Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation.",
+ "neighbors": [
+ 388,
+ 768,
+ 951,
+ 1170,
+ 1226
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1242,
+ "label": 2,
+ "text": "Title: On-Line Adaptation of a Signal Predistorter through Dual Reinforcement Learning \nAbstract: Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation.",
+ "neighbors": [
+ 388,
+ 768,
+ 951,
+ 1170,
+ 1226
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1243,
+ "label": 3,
+ "text": "Title: Adaptive probabilistic networks \nAbstract: Belief networks (or probabilistic networks) and neural networks are two forms of network representations that have been used in the development of intelligent systems in the field of artificial intelligence. Belief networks provide a concise representation of general probability distributions over a set of random variables, and facilitate exact calculation of the impact of evidence on propositions of interest. Neural networks, which represent parameterized algebraic combinations of nonlinear activation functions, have found widespread use as models of real neural systems and as function approximators because of their amenability to simple training algorithms. Furthermore, the simple, local nature of most neural network training algorithms provides a certain biological plausibility and allows for a massively parallel implementation. In this paper, we show that similar local learning algorithms can be derived for belief networks, and that these learning algorithms can operate using only information that is directly available from the normal, inferential processes of the networks. This removes the main obstacle preventing belief networks from competing with neural networks on the above-mentioned tasks. The precise, local, probabilistic interpretation of belief networks also allows them to be partially or wholly constructed by humans; allows the results of learning to be easily understood; and allows them to contribute to rational decision-making in a well-defined way. ",
+ "neighbors": [
+ 711
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1244,
+ "label": 3,
+ "text": "Title: On Convergence of the EM Algorithm and the Gibbs Sampler SUMMARY \nAbstract: In this article we investigate the relationship between the two popular algorithms, the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM type algorithm. This helps in implementing either of the algorithms as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also obtain a result that under conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference which uses proper prior distributions. We illustrate our results in a number of realistic examples all based on the generalized linear mixed models. ",
+ "neighbors": [
+ 40,
+ 154,
+ 1013,
+ 1233,
+ 1307
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1245,
+ "label": 6,
+ "text": "Title: Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs \nAbstract: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k \"classes\"). The definition is acquired by studying large collections of training examples of the form hx i ; f(x i )i. Existing approaches to this problem include (a) direct application of multiclass algorithms such as the decision-tree algorithms ID3 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output codes such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which BCH error-correcting codes are employed as a distributed output representation. We show that these output representations improve the performance of ID3 on the NETtalk task and of backpropagation on an isolated-letter speech-recognition task. These results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems. ",
+ "neighbors": [
+ 317,
+ 496,
+ 510,
+ 656,
+ 671,
+ 894,
+ 917,
+ 956
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1246,
+ "label": 3,
+ "text": "Title: Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks \nAbstract: We present an algorithm for arc reversal in Bayesian networks with tree-structured conditional probability tables, and consider some of its advantages, especially for the simulation of dynamic probabilistic networks. In particular, the method allows one to produce CPTs for nodes involved in the reversal that exploit regularities in the conditional distributions. We argue that this approach alleviates some of the overhead associated with arc reversal, plays an important role in evidence integration and can be used to restrict sampling of variables in DPNs. We also provide an algorithm that detects the dynamic irrelevance of state variables in forward simulation. This algorithm exploits the structured CPTs in a reversed network to determine, in a time-independent fashion, the conditions under which a variable does or does not need to be sampled.",
+ "neighbors": [
+ 34,
+ 238,
+ 458,
+ 1209
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1247,
+ "label": 5,
+ "text": "Title: Inductive Constraint Logic \nAbstract: A novel approach to learning first order logic formulae from positive and negative examples is presented. Whereas present inductive logic programming systems employ examples as true and false ground facts (or clauses), we view examples as interpretations which are true or false for the target theory. This viewpoint allows to reconcile the inductive logic programming paradigm with classical attribute value learning in the sense that the latter is a special case of the former. Because of this property, we are able to adapt AQ and CN2 type algorithms in order to enable learning of full first order formulae. However, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form, we will use a clausal representation, which corresponds to a conjuctive normal form where each conjunct forms a constraint on positive examples. This representation duality reverses also the role of positive and negative examples, both in the heuristics and in the algorithm. The resulting theory is incorporated in a system named ICL (Inductive Constraint Logic).",
+ "neighbors": [
+ 198,
+ 374,
+ 573,
+ 938,
+ 1037,
+ 1120,
+ 1251
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1248,
+ "label": 3,
+ "text": "Title: Bumptrees for Efficient Function, Constraint, and Classification Learning \nAbstract: A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. ",
+ "neighbors": [
+ 23,
+ 66,
+ 125,
+ 1078
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1249,
+ "label": 1,
+ "text": "Title: Automatic Definition of Modular Neural Networks \nAbstract: A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. ",
+ "neighbors": [
+ 646,
+ 1187,
+ 1325,
+ 1338
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1250,
+ "label": 4,
+ "text": "Title: Category: Control, Navigation and Planning Preference: Oral presentation Exploiting Model Uncertainty Estimates for Safe Dynamic\nAbstract: Model learning combined with dynamic programming has been shown to be effective for learning control of continuous state dynamic systems. The simplest method assumes the learned model is correct and applies dynamic programming to it, but many approximators provide uncertainty estimates on the fit. How can they be exploited? This paper addresses the case where the system must be prevented from having catastrophic failures during learning. We propose a new algorithm adapted from the dual control literature and use Bayesian locally weighted regression models with stochastic dynamic programming. A common reinforcement learning assumption is that aggressive exploration should be encouraged. This paper addresses the converse case in which the system has to reign in exploration. The algorithm is illustrated on a 4 dimensional simulated control problem.",
+ "neighbors": [
+ 169
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1251,
+ "label": 5,
+ "text": "Title: Multi-class problems and discretization in ICL Extended abstract \nAbstract: Handling multi-class problems and real numbers is important in practical applications of machine learning to KDD problems. While attribute-value learners address these problems as a rule, very few ILP systems do so. The few ILP systems that handle real numbers mostly do so by trying out all real values that are applicable, thus running into efficiency or overfitting problems. This paper discusses some recent extensions of ICL that address these problems. ICL, which stands for Inductive Constraint Logic, is an ILP system that learns first order logic formulae from positive and negative examples. The main charateristic of ICL is its view on examples. These are seen as interpretations which are true or false for the clausal target theory (in CNF). We first argue that ICL can be used for learning a theory in a disjunctive normal form (DNF). With this in mind, a possible solution for handling more than two classes is given (based on some ideas from CN2). Finally, we show how to tackle problems with continuous values by adapting discretization techniques from attribute value learners. ",
+ "neighbors": [
+ 239,
+ 1037,
+ 1247,
+ 1308
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1252,
+ "label": 4,
+ "text": "Title: Learning the Peg-into-Hole Assembly Operation with a Connectionist Reinforcement Technique \nAbstract: The paper presents a learning controller that is capable of increasing insertion speed during consecutive peg-into-hole operations, without increasing the contact force level. Our aim is to find a better relationship between measured forces and the controlled velocity, without using a complicated (human generated) model. We followed a connectionist approach. Two learning phases are distinguished. First the learning controller is trained (or initialised) in a supervised way by a suboptimal task frame controller. Then a reinforcement learning phase follows. The controller consists of two networks: (1) the policy network and (2) the exploration network. On-line robotic exploration plays a crucial role in obtaining a better policy. Optionally, this architecture can be extended with a third network: the reinforcement network. The learning controller is implemented on a CAD-based contact force simulator. In contrast with most other related work, the experiments are simulated in 3D with 6 degrees of freedom. Performance of a peg-into-hole task is measured in insertion time and average/maximum force level. The fact that a better performance can be obtained in this way, demonstrates the importance of model-free learning techniques for repetitive robotic assembly tasks. The paper presents the approach and simulation results. Keywords: robotic assembly, peg-into-hole, artificial neural networks, reinforcement learning.",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1253,
+ "label": 4,
+ "text": "Title: Using Temporal-Difference Reinforcement Learning to Improve Decision-Theoretic Utilities for Diagnosis \nAbstract: Probability theory represents and manipulates uncertainties, but cannot tell us how to behave. For that we need utility theory which assigns values to the usefulness of different states, and decision theory which concerns optimal rational decisions. There are many methods for probability modeling, but few for learning utility and decision models. We use reinforcement learning to find the optimal sequence of questions in a diagnosis situation while maintaining a high accuracy. Automated diagnosis on a heart-disease domain is used to demonstrate that temporal-difference learning can improve diagnosis. On the Cleveland heart-disease database our results are better than those reported from all previous methods. ",
+ "neighbors": [
+ 300,
+ 327,
+ 538
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1254,
+ "label": 1,
+ "text": "Title: Automatic Design of Cellular Neural Networks by means of Genetic Algorithms: Finding a Feature Detector\nAbstract: This paper aims to examine the use of genetic algorithms to optimize subsystems of cellular neural network architectures. The application at hand is character recognition: the aim is to evolve an optimal feature detector in order to aid a conventional classifier network to generalize across different fonts. To this end, a performance function and a genetic encoding for a feature detector are presented. An experiment is described where an optimal feature detector is indeed found by the genetic algorithm. We are interested in the application of cellular neural networks in computer vision. Genetic algorithms (GA's) [1-3] can serve to optimize the design of cellular neural networks. Although the design of the global architecture of the system could still be done by human insight, we propose that specific sub-modules of the system are best optimized using one or other optimization method. GAs are a good candidate to fulfill this optimization role, as they are well suited to problems where the objective function is a complex function of many parameters. The specific problem we want to investigate is one of character recognition. More specifically, we would like to use the GA to find optimal feature detectors to be used in the recognition of digits . ",
+ "neighbors": [
+ 70,
+ 91
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1255,
+ "label": 4,
+ "text": "Title: Packet Routing and Reinforcement Learning: Estimating Shortest Paths in Dynamic Graphs \nAbstract: This article exposes problems of the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns about drawing too strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties (parameter initialization, choice of number of hidden units, etc.). We present two results on data from the New York Stock Exchange. First, the variation due to different resamplings is significantly larger than the variation due to different network conditions. This result implies that it is important to not over-interpret a model (or an ensemble of models) estimated on one specific split of the data. Second, on each split, the neural network solution with early stopping is very close to a linear model; no significant nonlinearities are extracted. ",
+ "neighbors": [
+ 240
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1256,
+ "label": 6,
+ "text": "Title: Learning From a Population of Hypotheses \nAbstract: We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents. ",
+ "neighbors": [
+ 255,
+ 257,
+ 1215
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1257,
+ "label": 3,
+ "text": "Title: Markov Chain Monte Carlo in Practice: A Roundtable Discussion \nAbstract: Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August, 1996, a panel of experienced MCMC users discussed these and other issues, as well as various \"tricks of the trade\". This paper is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding convergence, assessing standard errors, identification of models for which good MCMC algorithms exist, and the current state of software development. ",
+ "neighbors": [
+ 21,
+ 25,
+ 589
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1258,
+ "label": 2,
+ "text": "Title: In Proceedings of the 1997 Sian Kaan International Workshop on Neural Networks and Neurocontrol. Real-Valued\nAbstract: 2 Neural Network & Machine Learning Laboratory Computer Science Department Brigham Young University Provo, UT 84602, USA Email: martinez@cs.byu.edu WWW: http://axon.cs.byu.edu Abstract. Many neural network models must be trained by finding a set of real-valued weights that yield high accuracy on a training set. Other learning models require weights on input attributes that yield high leave-one-out classification accuracy in order to avoid problems associated with irrelevant attributes and high dimensionality. In addition, there are a variety of general problems for which a set of real values must be found which maximize some evaluation function. This paper presents an algorithm for doing a schemata search over a real-valued weight space to find a set of weights (or other real values) that yield high values for a given evaluation function. The algorithm, called the Real-Valued Schemata Search (RVSS), uses the BRACE statistical technique [Moore & Lee, 1993] to determine when to narrow the search space. This paper details the RVSS approach and gives initial empirical results. ",
+ "neighbors": [
+ 401,
+ 1064
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1259,
+ "label": 3,
+ "text": "Title: ESTIMATING THE SQUARE ROOT OF A DENSITY VIA COMPACTLY SUPPORTED WAVELETS \nAbstract: A large body of nonparametric statistical literature is devoted to density estimation. Overviews are given in Silverman (1986) and Izenman (1991). This paper addresses the problem of univariate density estimation in a novel way. Our approach falls in the class of so called projection estimators, introduced by Cencov (1962). The orthonor-mal basis used is a basis of compactly supported wavelets from Daubechies' family. Kerkyacharian and Picard (1992, 1993), Donoho et al. (1996), and Delyon and Judit-sky (1993), among others, applied wavelets in density estimation. The local nature of wavelet functions makes the wavelet estimator superior to projection estimators that use classical orthonormal bases (Fourier, Hermite, etc.) Instead of estimating the unknown density directly, we estimate the square root of the density, which enables us to control the positiveness and the L 1 norm of the density estimate. However, in that approach one needs a pre-estimator of the density to calculate sample wavelet coefficients. We describe VISUSTOP, a data-driven procedure for determining the maximum number of levels in the wavelet density estimator. Coefficients in the selected levels are thresholded to make the estimator parsimonious. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1260,
+ "label": 2,
+ "text": "Title: Control of Selective Visual Attention: Modeling the \"Where\" Pathway \nAbstract: Intermediate and higher vision processes require selection of a subset of the available sensory information before further processing. Usually, this selection is implemented in the form of a spatially circumscribed region of the visual field, the so-called \"focus of attention\" which scans the visual scene dependent on the input and on the attentional state of the subject. We here present a model for the control of the focus of attention in primates, based on a saliency map. This mechanism is not only expected to model the functionality of biological vision but also to be essential for the understanding of complex scenes in machine vision.",
+ "neighbors": [
+ 319
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1261,
+ "label": 3,
+ "text": "Title: A guide to the literature on learning probabilistic networks from data \nAbstract: This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. ",
+ "neighbors": [
+ 1272
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1262,
+ "label": 3,
+ "text": "Title: Building Classifiers using Bayesian Networks \nAbstract: Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness which are characteristic of naive Bayes. We experimentally tested these approaches using benchmark problems from the U. C. Irvine repository, and compared them against C4.5, naive Bayes, and wrapper-based feature selection methods. ",
+ "neighbors": [
+ 227,
+ 1067
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1263,
+ "label": 3,
+ "text": "Title: On the Hardness of Approximate Reasoning \nAbstract: Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the existence of linear time satisfiability algorithms. What is even more surprising is that, as we show, even approximating the number of satisfying assignments (i.e., \"approximating\" approximate reasoning), is intractable for most of these restricted theories. We also identify some restricted classes of propositional formulae for which efficient algorithms for counting satisfying assignments can be given. fl Preliminary version of this paper appeared in the Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI93. y Supported by NSF grants CCR-89-02500 and CCR-92-00884 and by DARPA AFOSR-F4962-92-J-0466. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1264,
+ "label": 2,
+ "text": "Title: Extended Kalman filter in recurrent neural network training and pruning \nAbstract: Recently, extended Kalman filter (EKF) based training has been demonstrated to be effective in neural network training. However, its conjunction with pruning methods such as weight decay and optimal brain damage (OBD) has not yet been studied. In this paper, we will elucidate the method of EKF training and propose a pruning method which is based on the results obtained by EKF training. These combined training pruning method is applied to a time series prediction problem.",
+ "neighbors": [
+ 980
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1265,
+ "label": 1,
+ "text": "Title: Induction and Recapitulation of Deep Musical Structure \nAbstract: We describe recent extensions to our framework for the automatic generation of music-making programs. We have previously used genetic programming techniques to produce music-making programs that satisfy user-provided critical criteria. In this paper we describe new work on the use of connectionist techniques to automatically induce musical structure from a corpus. We show how the resulting neural networks can be used as critics that drive our genetic programming system. We argue that this framework can potentially support the induction and recapitulation of deep structural features of music. We present some initial results produced using neural and hybrid symbolic/neural critics, and we discuss directions for future work.",
+ "neighbors": [
+ 691,
+ 717
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1266,
+ "label": 6,
+ "text": "Title: Why Experimentation can be better than \"Perfect Guidance\" \nAbstract: Many problems correspond to the classical control task of determining the appropriate control action to take, given some (sequence of) observations. One standard approach to learning these control rules, called behavior cloning, involves watching a perfect operator operate a plant, and then trying to emulate its behavior. In the experimental learning approach, by contrast, the learner first guesses an initial operation-to-action policy and tries it out. If this policy performs sub-optimally, the learner can modify it to produce a new policy, and recur. This paper discusses the relative effectiveness of these two approaches, especially in the presence of perceptual aliasing, showing in particular that the experimental learner can often learn more effectively than the cloning one. ",
+ "neighbors": [
+ 169
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1267,
+ "label": 4,
+ "text": "Title: Planning by Incremental Dynamic Programming \nAbstract: This paper presents the basic results and ideas of dynamic programming as they relate most directly to the concerns of planning in AI. These form the theoretical basis for the incremental planning methods used in the integrated architecture Dyna. These incremental planning methods are based on continually updating an evaluation function and the situation-action mapping of a reactive system. Actions are generated by the reactive system and thus involve minimal delay, while the incremental planning process guarantees that the actions and evaluation function will eventually be optimal|no matter how extensive a search is required. These methods are well suited to stochastic tasks and to tasks in which a complete and accurate model is not available. For tasks too large to implement the situation-action mapping as a table, supervised-learning methods must be used, and their capabilities remain a significant limitation of the approach.",
+ "neighbors": [
+ 300,
+ 327,
+ 328,
+ 1269
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1268,
+ "label": 0,
+ "text": "Title: CBR for Document Retrieval: The FAllQ Project \nAbstract: This paper reports about a project on document retrieval in an industrial setting. The objective is to provide a tool that helps finding documents related to a given query, such as answers in Frequently Asked Questions databases. A CBR approach has been used to develop a running prototypical system which is currently under practical evaluation. ",
+ "neighbors": [
+ 1004,
+ 1005,
+ 1117
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1269,
+ "label": 4,
+ "text": "Title: Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions \nAbstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a tight bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning. One significant application of these results is to problems where a function approximator is used to learn a value function, with training of the approxi-mator based on trying to minimize the Bellman residual across states or state-action pairs. When control is based on the use of the resulting value function, this result provides a link between how well the objectives of function approximator training are met and the quality of the resulting control. ",
+ "neighbors": [
+ 95,
+ 327,
+ 328,
+ 334,
+ 434,
+ 776,
+ 994,
+ 1267
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1270,
+ "label": 0,
+ "text": "Title: BECOMING AN EXPERT CASE-BASED REASONER: LEARNING TO ADAPT PRIOR CASES \nAbstract: Experience plays an important role in the development of human expertise. One computational model of how experience affects expertise is provided by research on case-based reasoning, which examines how stored cases encapsulating traces of specific prior problem-solving episodes can be retrieved and re-applied to facilitate new problem-solving. Much progress has been made in methods for accessing relevant cases, and case-based reasoning is receiving wide acceptance both as a technology for developing intelligent systems and as a cognitive model of a human reasoning process. However, one important aspect of case-based reasoning remains poorly understood: the process by which retrieved cases are adapted to fit new situations. The difficulty of encoding effective adaptation rules by hand is widely recognized as a serious impediment to the development of fully autonomous case-based reasoning systems. Consequently, an important question is how case-based reasoning systems might learn to improve their expertise at case adaptation. We present a framework for acquiring this expertise by using a combination of general adaptation rules, introspective reasoning, and case-based reasoning about the case adaptation task itself. ",
+ "neighbors": [
+ 376,
+ 639,
+ 1225
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1271,
+ "label": 2,
+ "text": "Title: Prosopagnosia in Modular Neural Network Models \nAbstract: There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this chapter, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to 1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, 2) the developing infant's need to perform subordinate classification (identification) of faces early on, and 3) the infant's low visual acuity at birth. Inspired by de Schonen and Mancini's (1998) arguments that factors like these could bias the visual system to develop a specialized face processor, and Jacobs and Kosslyn's (1994) experiments in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules are simple unbiased classifiers, the competition is sufficient to achieve enough of a specialization that damaging one module impairs the model's face recognition more than its object recognition, and damaging the other module impairs the model's object recognition more than its face recognition. In Model II, however, we bias the modules by providing one with low spatial frequency information and the other with high spatial frequency information. In this case, when the model's task is subordinate classification of faces and superordinate classification of objects, the low spatial frequency network shows an even stronger specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that something resembling a face processing \"module\" could arise as a natural consequence of the infant's developmental environment without being innately specified. ",
+ "neighbors": [
+ 1065
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1272,
+ "label": 3,
+ "text": "Title: Robust Parameter Learning in Bayesian Networks with Missing Data \nAbstract: There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this chapter, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to 1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, 2) the developing infant's need to perform subordinate classification (identification) of faces early on, and 3) the infant's low visual acuity at birth. Inspired by de Schonen and Mancini's (1998) arguments that factors like these could bias the visual system to develop a specialized face processor, and Jacobs and Kosslyn's (1994) experiments in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules are simple unbiased classifiers, the competition is sufficient to achieve enough of a specialization that damaging one module impairs the model's face recognition more than its object recognition, and damaging the other module impairs the model's object recognition more than its face recognition. In Model II, however, we bias the modules by providing one with low spatial frequency information and the other with high spatial frequency information. In this case, when the model's task is subordinate classification of faces and superordinate classification of objects, the low spatial frequency network shows an even stronger specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that something resembling a face processing \"module\" could arise as a natural consequence of the infant's developmental environment without being innately specified. ",
+ "neighbors": [
+ 336,
+ 1261
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1273,
+ "label": 2,
+ "text": "Title: Gene Structure Prediction by Linguistic Methods \nAbstract: The higher-order structure of genes and other features of biological sequences can be described by means of formal grammars. These grammars can then be used by general-purpose parsers to detect and assemble such structures by means of syntactic pattern recognition. We describe a grammar and parser for eukaryotic protein-encoding genes, which by some measures is as effective as current connectionist and combinatorial algorithms in predicting gene structures for sequence database entries. Parameters on the grammar rules are optimized for several different species, and mixing experiments performed to determine the degree of species specificity and the relative importance of compositional, signal-based, and syntactic components in gene prediction.",
+ "neighbors": [
+ 358,
+ 1113,
+ 1299
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1274,
+ "label": 2,
+ "text": "Title: Learning a Specialization for Face Recognition: The Effect of Spatial Frequency \nAbstract: The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent mechanisms in the brain. Such a dissociation could be the result of a competitive learning mechanism that, during development, devotes neural resources to the tasks they are best at performing. Studies of normal adult performance on face and object recognition tasks seem to indicate that face recognition is primarily configural, involving the low spatial frequency information present in a stimulus over relatively large distances, whereas object recognition is primarily featural, involving analysis of the object's parts using local, high spatial frequency information. In a feed-forward computational model of visual processing, two modules compete to classify input stimuli; when one module receives low spatial frequency information and the other receives high spatial frequency information, the low-frequency module shows a strong specialization for face recognition in a combined face identification/object classification task. The series of experiments shows that the fine discrimination necessary for distinguishing members of a visually homoge neous class such as faces relies heavily on the low spatial frequencies present in a stimulus.",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1275,
+ "label": 2,
+ "text": "Title: Combining Exploratory Projection Pursuit And Projection Pursuit Regression With Application To Neural Networks \nAbstract: We present a novel classification and regression method that combines exploratory projection pursuit (unsupervised training) with projection pursuit regression (supervised training), to yield a new family of cost/complexity penalty terms. Some improved generalization properties are demonstrated on real world problems.",
+ "neighbors": [
+ 207,
+ 1202,
+ 1276,
+ 1277
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1276,
+ "label": 2,
+ "text": "Title: Objective Function Formulation of the BCM Theory of Visual Cortical Plasticity: Statistical Connections, Stability Conditions \nAbstract: In this paper, we present an objective function formulation of the BCM theory of visual cortical plasticity that permits us to demonstrate the connection between the unsupervised BCM learning procedure and various statistical methods, in particular, that of Projection Pursuit. This formulation provides a general method for stability analysis of the fixed points of the theory and enables us to analyze the behavior and the evolution of the network under various visual rearing conditions. It also allows comparison with many existing unsupervised methods. This model has been shown successful in various applications such as phoneme and 3D object recognition. We thus have the striking and possibly highly significant result that a biological neuron is performing a sophisticated statistical procedure. ",
+ "neighbors": [
+ 207,
+ 1202,
+ 1217,
+ 1275,
+ 1277,
+ 1279
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1277,
+ "label": 2,
+ "text": "Title: Face Recognition using a Hybrid Supervised/Unsupervised Neural Network \nAbstract: A system for automatic face recognition is presented. It consists of several steps; Automatic detection of the eyes and mouth is followed by a spatial normalization of the images. The classification of the normalized images is carried out by a hybrid (supervised and unsupervised) Neural Network. Two methods for reducing the overfitting a common problem in high dimensional classification schemes are presented, and the superiority of their combination is demonstrated. ",
+ "neighbors": [
+ 1202,
+ 1275,
+ 1276
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1278,
+ "label": 2,
+ "text": "Title: `Balancing' of conductances may explain irregular cortical spiking. \nAbstract: Five related factors are identified which enable single compartment Hodgkin-Huxley model neurons to convert random synaptic input into irregular spike trains similar to those seen in in vivo cortical recordings. We suggest that cortical neurons may operate in a narrow parameter regime where synaptic and intrinsic conductances are balanced to re flect, through spike timing, detailed correlations in the inputs. fl Please send comments to tony@salk.edu. The reference for this paper is: Technical Report no. INC-9502, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. ",
+ "neighbors": [
+ 1218
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1279,
+ "label": 2,
+ "text": "Title: Three-Dimensional Object Recognition Using an Unsupervised BCM Network: The Usefulness of Distinguishing Features \nAbstract: We propose an object recognition scheme based on a method for feature extraction from gray level images that corresponds to recent statistical theory, called projection pursuit, and is derived from a biologically motivated feature extracting neuron. To evaluate the performance of this method we use a set of very detailed psychophysical 3D object recognition experiments (Bulthoff and Edelman, 1992). ",
+ "neighbors": [
+ 207,
+ 357,
+ 1276
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1280,
+ "label": 6,
+ "text": "Title: WRAPPERS FOR PERFORMANCE ENHANCEMENT AND OBLIVIOUS DECISION GRAPHS \nAbstract: This paper introduces the idea of clearning, of simultaneously cleaning data and learning the underlying structure. The cleaning step can be viewed as top-down processing (the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). After discussing the statistical foundation of the proposed method from a maximum likelihood perspective, we apply clearning to a notoriously hard problem where benchmark performances are very well known: the prediction of foreign exchange rates. On the difficult 1993-1994 test period, clearning in conjunction with pruning yields an annualized return between 35 and 40% (out-of-sample), significantly better than an otherwise identical network trained without cleaning. The network was started with 69 inputs and 15 hidden units and ended up with only 39 non-zero weights between inputs and hidden units. The resulting ultra-sparse final architectures obtained with clearning and pruning are immune against overfitting, even on very noisy problems since the cleaned data allow for a simpler model. Apart from the very competitive performance, clearning gives insight into the data: we show how to estimate the overall signal-to-noise ratio of each input variable, and we show that error estimates for each pattern can be used to detect and remove outliers, and to replace missing or corrupted data by cleaned values. Clearning can be used in any nonlinear regression or classification problem.",
+ "neighbors": [
+ 188,
+ 369,
+ 1302
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1281,
+ "label": 6,
+ "text": "Title: Applying Winnow to Context-Sensitive Spelling Correction \nAbstract: Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classifiers. We find that: (1) When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2) When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3) When a test set is encountered that is dissimilar to the training set, Winnow is better than Bayes at adapting to the unfamiliar test set, using a strategy we will present for combining learning on the training set with unsupervised learning on the (noisy) test set. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Information Technology Center America; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Information Technology Center America. All rights reserved. ",
+ "neighbors": [
+ 297,
+ 1056,
+ 1321
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1282,
+ "label": 3,
+ "text": "Title: Geometric Ergodicity and Hybrid Markov Chains \nAbstract: Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the so-called hybrid chains. We prove that under certain conditions, a hybrid chain will \"inherit\" the geometric ergodicity of its constituent parts. Acknowledgements. We thank Charlie Geyer for a number of very useful comments regarding spectral theory and central limit theorems. We thank Alison Gibbs, Phil Reiss, Peter Rosenthal, and Richard Tweedie for very helpful discussions. We thank the referee and the editor for many excellent suggestions. ",
+ "neighbors": [
+ 235,
+ 947,
+ 1063
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1283,
+ "label": 1,
+ "text": "Title: A Methodology for Strategy Optimization Under Uncertainty in the Extended Two-Dimensional Pursuer/Evader Problem \nAbstract: ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1284,
+ "label": 2,
+ "text": "Title: Avoiding overfitting by locally matching the noise level of the data gating network discovers the\nAbstract: When trying to forecast the future behavior of a real-world system, two of the key problems are nonstationarity of the process (e.g., regime switching) and overfitting of the model (particularly serious for noisy processes). This articles shows how gated experts can point to solutions to these problems. The architecture, also called society of experts and mixture of experts consists of a (nonlinear) gating network and several (nonlinear) competing experts. Each expert learns a conditional mean (as usual), but each expert also has its own adaptive width. The gating network learns to assign a probability to each expert that depends on the input. This article first discusses the assumptions underlying this architecture and derives the weight update rules. It then evaluates the performance of gated experts in comparison to that of single networks, as well as to networks with two outputs, one predicting the mean, the other one the local error bar. This article also investigates the ability of gated experts to discover and characterize underlying the regimes. The results are: * there is significantly less overfitting compared to single nets, for two reasons: only subsets of the potential inputs are given to the experts and gating network (less of a curse of dimensionality), and the experts learn to match their variances to the (local) noise levels, thus only learning as This article focuses on the architecture and the overfitting problem. Applications to a computer-generated toy problem and the laser data from Santa Fe Competition are given in [Mangeas and Weigend, 1995], and the application to the real-world problem of predicting the electricity demand of France are given in [Mangeas et al., 1995]. much as the data support.",
+ "neighbors": [
+ 40,
+ 180,
+ 768,
+ 1170,
+ 1227
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1285,
+ "label": 5,
+ "text": "Title: 248 Efficient Superscalar Performance Through Boosting \nAbstract: The foremost goal of superscalar processor design is to increase performance through the exploitation of instruction-level parallelism (ILP). Previous studies have shown that speculative execution is required for high instruction per cycle (IPC) rates in non-numerical applications. The general trend has been toward supporting speculative execution in complicated, dynamically-scheduled processors. Performance, though, is more than just a high IPC rate; it also depends upon instruction count and cycle time. Boosting is an architectural technique that supports general speculative execution in simpler, statically-scheduled processors. Boosting labels speculative instructions with their control dependence information. This labelling eliminates control dependence constraints on instruction scheduling while still providing full dependence information to the hardware. We have incorporated boosting into a trace-based, global scheduling algorithm that exploits ILP without adversely affecting the instruction count of a program. We use this algorithm and estimates of the boosting hardware involved to evaluate how much speculative execution support is really necessary to achieve good performance. We find that a statically-scheduled superscalar processor using a minimal implementation of boosting can easily reach the performance of a much more complex dynamically-scheduled superscalar processor. ",
+ "neighbors": [
+ 423
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1286,
+ "label": 6,
+ "text": "Title: The Minimum Feature Set Problem \nAbstract: This paper appeared in Neural Networks 7 (1994), no. 3, pp. 491-494. ",
+ "neighbors": [
+ 1006
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1287,
+ "label": 3,
+ "text": "Title: Utility Elicitation as a Classification Problem \nAbstract: We investigate the application of classification techniques to utility elicitation. In a decision problem, two sets of parameters must generally be elicited: the probabilities and the utilities. While the prior and conditional probabilities in the model do not change from user to user, the utility models do. Thus it is necessary to elicit a utility model separately for each new user. Elicitation is long and tedious, particularly if the outcome space is large and not decomposable. There are two common approaches to utility function elicitation. The first is to base the determination of the user's utility function solely on elicitation of qualitative preferences. The second makes assumptions about the form and decomposability of the utility function. Here we take a different approach: we attempt to identify the new user's utility function based on classification relative to a database of previously collected utility functions. We do this by identifying clusters of utility functions that minimize an appropriate distance measure. Having identified the clusters, we develop a classification scheme that requires many fewer and simpler assessments than full utility elicitation and is more robust than utility elicitation based solely on preferences. We have tested our algorithm on a small database of utility functions in a prenatal diagnosis domain and the results are quite promising. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1288,
+ "label": 3,
+ "text": "Title: Ensemble Learning for Hidden Markov Models \nAbstract: The standard method for training Hidden Markov Models optimizes a point estimate of the model parameters. This estimate, which can be viewed as the maximum of a posterior probability density over the model parameters, may be susceptible to over-fitting, and contains no indication of parameter uncertainty. Also, this maximum may be unrepresentative of the posterior probability distribution. In this paper we study a method in which we optimize an ensemble which approximates the entire posterior probability distribution. The ensemble learning algorithm requires the same resources as the traditional Baum-Welch algorithm. The traditional training algorithm for hidden Markov models is an expectation-maximization (EM) algorithm (Dempster et al. 1977) known as the Baum-Welch algorithm. It is a maximum likelihood method, or, with a simple modification, a penalized maximum likelihood method, which can be viewed as maximizing a posterior probability density over the model parameters. Recently, Hinton and van Camp (1993) developed a technique known as ensemble learning (see also MacKay (1995) for a review). Whereas maximum a posteriori methods optimize a point estimate of the parameters, in ensemble learning an ensemble is optimized, so that it approximates the entire posterior probability distribution over the parameters. The objective function that is optimized is a variational free energy (Feynman 1972) which measures the relative entropy between the approximating ensemble and the true distribution. In this paper we derive and test an ensemble learning algorithm for hidden Markov models, building on Neal ",
+ "neighbors": [
+ 42,
+ 444
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1289,
+ "label": 2,
+ "text": "Title: An Object-Based Neural Model of Serial Processing in Visual Multielement Tracking \nAbstract: A quantitative model is provided for psychophysical data on the tracking of multiple visual elements (multielement tracking). The model employs an object-based attentional mechanism for constructing and updating object representations. The model selectively enhances neural activations to serially construct and update the internal representations of objects through correlation-based changes in synaptic weights. The correspondence problem between items in memory and elements in the visual input is resolved through a combination of top-down prediction signals and bottom-up grouping processes. Simulations of the model on image sequences used in multielement tracking experiments show that reported results are consistent with a serial tracking mechanism that is based on psychophysical and neurobiological findings. In addition, simulations show that observed effects of perceptual grouping on tracking accuracy may result from the interactions between attention-guided predictions of object location and motion and grouping processes involved in solving the motion correspondence problem. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1290,
+ "label": 3,
+ "text": "Title: Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule Bases \nAbstract: This paper describes Rapture | a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a probabilistic rule base and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods. ",
+ "neighbors": [
+ 72,
+ 88,
+ 823,
+ 974,
+ 1339
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1291,
+ "label": 2,
+ "text": "Title: Volatility of Volatility of Financial Markets \nAbstract: We present empirical evidence for considering volatility of Eurodollar futures as a stochastic process, requiring a generalization of the standard Black-Scholes (BS) model which treats volatility as a constant. We use a previous development of a statistical mechanics of financial markets (SMFM) to model these issues. ",
+ "neighbors": [
+ 972,
+ 983,
+ 1146
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1292,
+ "label": 3,
+ "text": "Title: Plausibility Measures: A User's Guide \nAbstract: We examine a new approach to modeling uncertainty based on plausibility measures, where a plausibility measure just associates with an event its plausibility, an element is some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a plausibility measure makes it easy for us to add structure on an as needed basis, letting us examine what is required to ensure that a plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their algebraic properties, analogues to the use of + and fi in probability theory. An understanding of such properties will be essential if plausibility measures are to be used in practice as a representation tool.",
+ "neighbors": [
+ 160,
+ 196
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1293,
+ "label": 2,
+ "text": "Title: TURING COMPUTABILITY WITH NEURAL NETS \nAbstract: This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of less than 10 5 synchronously evolving processors, interconnected linearly. High-order connections are not required.",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1294,
+ "label": 3,
+ "text": "Title: \"Linear Dependencies Represented by Chain Graphs,\" \"Graphical Modelling With MIM,\" Manual. \"Identifying Independence in Bayesian\nAbstract: 8] Dori, D. and Tarsi, M., \"A Simple Algorithm to Construct a Consistent Extension of a Partially Oriented Graph,\" Computer Science Department, Tel-Aviv University. Also Technical Report R-185, UCLA, Cognitive Systems Laboratory, October 1992. [14] Pearl, J. and Wermuth, N., \"When Can Association Graphs Admit a Causal Interpretation?,\" UCLA, Cognitive Systems Laboratory, Technical Report R-183-L, November 1992. [17] Verma, T.S. and Pearl, J., \"Deciding Morality of Graphs is NP-complete,\" Technical Report R-188, UCLA, Cognitive Systems Laboratory, October 1992. ",
+ "neighbors": [
+ 152,
+ 488
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1295,
+ "label": 1,
+ "text": "Title: Analysis of Neurocontrollers Designed by Simulated Evolution \nAbstract: Randomized, adaptive, greedy search using evolutionary algorithms offers a powerful and versatile approach to the automated design of neural network architectures for a variety of tasks in artificial intelligence and robotics. In this paper we present results from the evolutionary design of a neuro-controller for a robotic bulldozer. This robot is given the task of clearing an arena littered with boxes by pushing boxes to the sides. Through a careful analysis of the evolved networks we show how evolution exploits the design constraints and properties of the environment to produce network structures of high fitness. We conclude with a brief summary of related ongoing research examining the intricate interplay between environment and evolutionary processes in determining the structure and function of the resulting neural architectures.",
+ "neighbors": [
+ 91,
+ 123,
+ 885,
+ 1161,
+ 1236
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1296,
+ "label": 1,
+ "text": "Title: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs \nAbstract: Randomized, adaptive, greedy search using evolutionary algorithms offers a powerful and versatile approach to the automated design of neural network architectures for a variety of tasks in artificial intelligence and robotics. In this paper we present results from the evolutionary design of a neuro-controller for a robotic bulldozer. This robot is given the task of clearing an arena littered with boxes by pushing boxes to the sides. Through a careful analysis of the evolved networks we show how evolution exploits the design constraints and properties of the environment to produce network structures of high fitness. We conclude with a brief summary of related ongoing research examining the intricate interplay between environment and evolutionary processes in determining the structure and function of the resulting neural architectures.",
+ "neighbors": [
+ 91,
+ 654
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1297,
+ "label": 0,
+ "text": "Title: Abstract \nAbstract: Self-selection of input examples on the basis of performance failure is a powerful bias for learning systems. The definition of what constitutes a learning bias, however, has been typically restricted to bias provided by the input language, hypothesis language, and preference criteria between competing concept hypotheses. But if bias is taken in the broader context as any basis that provides a preference for one concept change over another, then the paradigm of failure-driven processing indeed provides a bias. Bias is exhibited by the selection of examples from an input stream that are examples of failure; successful performance is filtered out. We show that the degrees of freedom are less in failure-driven learning than in success-driven learning and that learning is facilitated because of this constraint. We also broaden the definition of failure, provide a novel taxonomy of failure causes, and illustrate the interaction of both in a multistrategy learning system called Meta-AQUA. ",
+ "neighbors": [
+ 167,
+ 340,
+ 416
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1298,
+ "label": 2,
+ "text": "Title: In Fast Non-Linear Dimension Reduction \nAbstract: We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train.",
+ "neighbors": [
+ 274,
+ 387,
+ 1041
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1299,
+ "label": 2,
+ "text": "Title: Non-Deterministic, Constraint-Based Parsing of Human Genes \nAbstract: We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train.",
+ "neighbors": [
+ 156,
+ 358,
+ 1113,
+ 1273
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1300,
+ "label": 2,
+ "text": "Title: Negative observations concerning approximations from spaces generated by scattered shifts of functions vanishing at 1 \nAbstract: Approximation by scattered shifts f( ff)g ff2A of a basis function are considered, and different methods for localizing these translates are compared. It is argued in the note that the superior localization processes are those that employ the original translates only. ",
+ "neighbors": [
+ 211,
+ 1114
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1301,
+ "label": 3,
+ "text": "Title: The Stationary Wavelet Transform and some Statistical Applications \nAbstract: Wavelets are of wide potential use in statistical contexts. The basics of the discrete wavelet transform are reviewed using a filter notation that is useful subsequently in the paper. A `stationary wavelet transform', where the coefficient sequences are not decimated at each stage, is described. Two different approaches to the construction of an inverse of the stationary wavelet transform are set out. The application of the stationary wavelet transform as an exploratory statistical method is discussed, together with its potential use in nonparametric regression. A method of local spectral density estimation is developed. This involves extensions to the wavelet context of standard time series ideas such as the periodogram and spectrum. The technique is illustrated by its application to data sets from astronomy and veterinary anatomy.",
+ "neighbors": [
+ 1033
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1302,
+ "label": 3,
+ "text": "Title: Targeting Business Users with Decision Table Classifiers \nAbstract: Business users and analysts commonly use spreadsheets and 2D plots to analyze and understand their data. On-line Analytical Processing (OLAP) provides these users with added flexibility in pivoting data around different attributes and drilling up and down the multi-dimensional cube of aggregations. Machine learning researchers, however, have concentrated on hypothesis spaces that are foreign to most users: hyper-planes (Perceptrons), neural networks, Bayesian networks, decision trees, nearest neighbors, etc. In this paper we advocate the use of decision table classifiers that are easy for line-of-business users to understand. We describe several variants of algorithms for learning decision tables, compare their performance, and describe a visualization mechanism that we have implemented in MineSet. The performance of decision tables is comparable to other known algorithms, such as C4.5/C5.0, yet the resulting classifiers use fewer attributes and are more comprehensible. ",
+ "neighbors": [
+ 583,
+ 1148,
+ 1210,
+ 1280
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1303,
+ "label": 6,
+ "text": "Title: The Challenge of Revising an Impure Theory \nAbstract: A pure rule-based program will return a set of answers to each query; and will return the same answer set even if its rules are re-ordered. However, an impure program, which includes the Prolog cut \"!\" and not() operators, can return different answers if the rules are re-ordered. There are also many reasoning systems that return only the first answer found for each query; these first answers, too, depend on the rule order, even in pure rule-based systems. A theory revision algorithm, seeking a revised rule-base whose expected accuracy, over the distribution of queries, is optimal, should therefore consider modifying the order of the rules. This paper first shows that a polynomial number of training \"labeled queries\" (each a query coupled with its correct answer) provides the distribution information necessary to identify the optimal ordering. It then proves, however, that the task of determining which ordering is optimal, once given this information, is intractable even in trivial situations; e.g., even if each query is an atomic literal, we are seeking only a \"perfect\" theory, and the rule base is propositional. We also prove that this task is not even approximable: Unless P = N P , no polynomial time algorithm can produce an ordering of an n-rule theory whose accuracy is within n fl of optimal, for some fl > 0. We also prove similar hardness, and non-approximatability, results for the related tasks of determining, in these impure contexts, (1) the optimal ordering of the antecedents; (2) the optimal set of rules to add or (3) to delete; and (4) the optimal priority values for a set of defaults. ",
+ "neighbors": [
+ 72,
+ 995,
+ 996
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1304,
+ "label": 2,
+ "text": "Title: Noisy Time Series Prediction using Symbolic Representation and Recurrent Neural Network Grammatical Inference \nAbstract: Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals. We introduce a new intelligent signal processing method which addresses the difficulties. The method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. The symbolic representation aids the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well known behavior such as trend following and mean reversal are extracted. ",
+ "neighbors": [
+ 233,
+ 261,
+ 951,
+ 1146
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1305,
+ "label": 6,
+ "text": "Title: Dynamic Automatic Model Selection \nAbstract: COINS Technical Report 92-30 February 1992 Abstract The problem of how to learn from examples has been studied throughout the history of machine learning, and many successful learning algorithms have been developed. A problem that has received less attention is how to select which algorithm to use for a given learning task. The ability of a chosen algorithm to induce a good generalization depends on how appropriate the model class underlying the algorithm is for the given task. We define an algorithm's model class to be the representation language it uses to express a generalization of the examples. Supervised learning algorithms differ in their underlying model class and in how they search for a good generalization. Given this characterization, it is not surprising that some algorithms find better generalizations for some, but not all tasks. Therefore, in order to find the best generalization for each task, an automated learning system must search for the appropriate model class in addition to searching for the best generalization within the chosen class. This thesis proposal investigates the issues involved in automating the selection of the appropriate model class. The presented approach has two facets. Firstly, the approach combines different model classes in the form of a model combination decision tree, which allows the best representation to be found for each subconcept of the learning task. Secondly, which model class is the most appropriate is determined dynamically using a set of heuristic rules. Explicit in each rule are the conditions in which a particular model class is appropriate and if it is not, what should be done next. In addition to describing the approach, this proposal describes how the approach will be evaluated in order to demonstrate that it is both an efficient and effective method for automatic model selection. ",
+ "neighbors": [
+ 58,
+ 217,
+ 793,
+ 1122
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1306,
+ "label": 4,
+ "text": "Title: Learning One More Thing \nAbstract: Most research on machine learning has focused on scenarios in which a learner faces a single, isolated learning task. The lifelong learning framework assumes that the learner encounters a multitude of related learning tasks over its lifetime, providing the opportunity for the transfer of knowledge among these. This paper studies lifelong learning in the context of binary classification. It presents the invariance approach, in which knowledge is transferred via a learned model of the invariances of the domain. Results on learning to recognize objects from color images demonstrate superior generalization capabilities if invariances are learned and used to bias subsequent learning.",
+ "neighbors": [
+ 1024
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1307,
+ "label": 3,
+ "text": "Title: Backfitting in Smoothing Spline ANOVA \nAbstract: A scheme to compute smoothing spline ANOVA estimates for large data sets with a (near) tensor-product structure is proposed. Such data sets are common in spatial-temporal analysis and image analysis. This scheme combines backfitting algorithm with iterative imputation algorithm in order to save both computational space and time. The convergence of this algorithm and various ways to further speed it up, such as collapsing component functions and successive over-relaxation, are discussed. Issues related to its application in spatial-temporal analysis are discussed too. An application to a global analysis of historical surface temperature data is described. ",
+ "neighbors": [
+ 222,
+ 298,
+ 1244
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1308,
+ "label": 5,
+ "text": "Title: Lookahead and Discretization in ILP \nAbstract: We present and evaluate two methods for improving the performance of ILP systems. One of them is discretization of numerical attributes, based on Fayyad and Irani's text [9], but adapted and extended in such a way that it can cope with some aspects of discretization that only occur in relational learning problems (when indeterminate literals occur). The second technique is lookahead. It is a well-known problem in ILP that a learner cannot always assess the quality of a refinement without knowing which refinements will be enabled afterwards, i.e. without looking ahead in the refinement lattice. We present a simple method for specifying when lookahead is to be used, and what kind of lookahead is interesting. Both the discretization and lookahead techniques are evaluated experimentally. The results show that both techniques improve the quality of the induced theory, while computational costs are acceptable.",
+ "neighbors": [
+ 1120,
+ 1251
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1309,
+ "label": 2,
+ "text": "Title: Can Recurrent Neural Networks Learn Natural Language Grammars? W&Z recurrent neural networks are able to\nAbstract: Recurrent neural networks are complex parametric dynamic systems that can exhibit a wide range of different behavior. We consider the task of grammatical inference with recurrent neural networks. Specifically, we consider the task of classifying natural language sentences as grammatical or ungrammatical can a recurrent neural network be made to exhibit the same kind of discriminatory power which is provided by the Principles and Parameters linguistic framework, or Government and Binding theory? We attempt to train a network, without the bifurcation into learned vs. innate components assumed by Chomsky, to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. We consider how a recurrent neural network could possess linguistic capability, and investigate the properties of Elman, Narendra & Parthasarathy (N&P) and Williams & Zipser (W&Z) recurrent networks, and Frasconi-Gori-Soda (FGS) locally recurrent networks in this setting. We show that both ",
+ "neighbors": [
+ 233
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1310,
+ "label": 0,
+ "text": "Title: Improved Heterogeneous Distance Functions \nAbstract: Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications the new distance metrics achieve higher classification accuracy on average than three previous distance functions on those datasets that have both nominal and continuous attributes.",
+ "neighbors": [
+ 1175
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1311,
+ "label": 1,
+ "text": "Title: Duplication of Coding Segments in Genetic Programming \nAbstract: Research into the utility of non-coding segments, or introns, in genetic-based encodings has shown that they expedite the evolution of solutions in domains by protecting building blocks against destructive crossover. We consider a genetic programming system where non-coding segments can be removed, and the resultant chromosomes returned into the population. This parsimonious repair leads to premature convergence, since as we remove the naturally occurring non-coding segments, we strip away their protective backup feature. We then duplicate the coding segments in the repaired chromosomes, and place the modified chromosomes into the population. The duplication method significantly improves the learning rate in the domain we have considered. We also show that this method can be applied to other domains.",
+ "neighbors": [
+ 497,
+ 552,
+ 691,
+ 693,
+ 910,
+ 1205
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1312,
+ "label": 1,
+ "text": "Title: Evolution of Iteration in Genetic Programming D a v d A The solution to many\nAbstract: This paper introduces the new operation of restricted iteration creation that automatically Genetic programming extends Holland's genetic algorithm to the task of automatic programming. Early work on genetic programming demonstrated that it is possible to evolve a sequence of work-performing steps in a single result-producing branch (that is, a one-part \"main\" program). The book Genetic Programming: On the Programming of Computers by Means of Natural Selection (Koza 1992) describes an extension of Holland's genetic algorithm in which the genetic population consists of computer programs (that is, compositions of primitive functions and terminals). See also Koza and Rice (1992). In the most basic form of genetic programming (where only a single result-producing branch is evolved), genetic programming demonstrated the capability to discover a sequence (as to both its length and its content) of work-performing steps that is sufficient to produce a satisfactory solution to several problems, including many problems that have been used over the years as benchmarks in machine learning and artificial intelligence. Before applying genetic programming to a problem, the user must perform five major preparatory steps, namely identifying the terminals (inputs) of the to-be-evolved programs, identifying the primitive functions (operations) contained in the to-be-evolved programs, creating the fitness measure for evaluating how well a given program does at solving the problem at hand, choosing certain control parameters (notably population size and number of generations to be run), and determining the termination criterion and method of result designation (typically the best-so-far individual from the populations produced during the run). creates a restricted iteration-performing",
+ "neighbors": [
+ 91,
+ 300,
+ 788,
+ 1161
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1313,
+ "label": 2,
+ "text": "Title: Pointer Adaptation and Pruning of Min-Max Fuzzy Inference and Estimation \nAbstract: This paper describes a partial-memory incremental learning method based on the AQ15c inductive learning system. The method maintains a representative set of past training examples that are used together with new examples to appropriately modify the currently held hypotheses. Incremental learning is evoked by feedback from the environment or from the user. Such a method is useful in applications involving intelligent agents acting in a changing environment, active vision, and dynamic knowledge-bases. For this study, the method is applied to the problem of computer intrusion detection in which symbolic profiles are learned for a computer systems users. In the experiments, the proposed method yielded significant gains in terms of learning time and memory requirements at the expense of slightly lower predictive accuracy and higher concept complexity, when compared to batch learning, in which all examples are given at once. ",
+ "neighbors": [
+ 967
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1314,
+ "label": 1,
+ "text": "Title: Empirical studies of the genetic algorithm with non-coding segments \nAbstract: The genetic algorithm (GA) is a problem solving method that is modelled after the process of natural selection. We are interested in studying a specific aspect of the GA: the effect of non-coding segments on GA performance. Non-coding segments are segments of bits in an individual that provide no contribution, positive or negative, to the fitness of that individual. Previous research on non-coding segments suggests that including these structures in the GA may improve GA performance. Understanding when and why this improvement occurs will help us to use the GA to its full potential. In this article, we discuss our hypotheses on non-coding segments and describe the results of our experiments. The experiments may be separated into two categories: testing our program on problems from previous related studies, and testing new hypotheses on the effect of non-coding segments. ",
+ "neighbors": [
+ 91,
+ 93,
+ 910,
+ 1205
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1315,
+ "label": 0,
+ "text": "Title: Case-based Acquisition of User Preferences for Solution Improvement in Ill-Structured Domains \nAbstract: 1 We have developed an approach to acquire complicated user optimization criteria and use them to guide ",
+ "neighbors": [
+ 550
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1316,
+ "label": 2,
+ "text": "Title: Lending Direction to Neural Networks \nAbstract: We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values on a cyclic range, between 0 and 2 radians. This measure is appropriate to many domains, representing cyclic or angular values, e.g., wind direction, days of the week, phases of the moon. The state of each unit in a Directional-Unit Boltzmann Machine (DUBM) is described by a complex variable, where the phase component specifies a direction; the weights are also complex variables. We associate a quadratic energy function, and corresponding probability, with each DUBM configuration. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. In a mean-field approximation to a stochastic dubm, the phase component of a unit's state represents its mean direction, and the magnitude component specifies the degree of certainty associated with this direction. This combination of a value and a certainty provides additional representational power in a unit. We present a proof that the settling dynamics for a mean-field DUBM cause convergence to a free energy minimum. Finally, we describe a learning algorithm and simulations that demonstrate a mean-field DUBM's ability to learn interesting mappings. fl To appear in: Neural Networks.",
+ "neighbors": [
+ 1207
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1317,
+ "label": 2,
+ "text": "Title: The end of the line for a brain-damaged model of unilateral neglect \nAbstract: For over a century, it has been known that damage to the right hemisphere of the brain can cause patients to be unaware of the contralesional side of space. This condition, known as unilateral neglect, represents a collection of clinically related spatial disorders characterized by the failure in free vision to respond, explore, or orient to stimuli predominantly located on the side of space opposite the damaged hemisphere. Recent studies using the simple task of line bisection, a conventional diagnostic test, have proved surprisingly revealing with respect to the spatial and attentional impairments involved in neglect. In line bisection, the patient is asked to mark the midpoint of a thin horizontal line on a sheet of paper. Neglect patients generally transect far to the right of the center. Extensive studies of line bisection have been conducted, manipulating|among other factors|line length, orientation, and position. We have simulated the pattern of results using an existing computational model of visual perception and selective attention called morsel (Mozer, 1991). morsel has already been used to model data in a related disorder, neglect dyslexia (Mozer & Behrmann, 1990). In this earlier work, morsel was \"lesioned\" in accordance with the damage we suppose to have occurred in the brains of ",
+ "neighbors": [
+ 969
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1318,
+ "label": 2,
+ "text": "Title: Models of Parallel Adaptive Logic \nAbstract: This paper overviews a proposed architecture for adaptive parallel logic referred to as ASOCS (Adaptive Self-Organizing Concurrent System). The ASOCS approach is based on an adaptive network composed of many simple computing elements which operate in a parallel asynchronous fashion. Problem specification is given to the system by presenting if-then rules in the form of boolean conjunctions. Rules are added incrementally and the system adapts to the changing rule-base. Adaptation and data processing form two separate phases of operation. During processing the system acts as a parallel hardware circuit. The adaptation process is distributed amongst the computing elements and efficiently exploits parallelism. Adaptation is done in a self-organizing fashion and takes place in time linear with the depth of the network. This paper summarizes the overall ASOCS concept and overviews three specific architectures. ",
+ "neighbors": [
+ 12
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1319,
+ "label": 0,
+ "text": "Title: PRONOUNCING NAMES BY A COMBINATION OF RULE-BASED AND CASE-BASED REASONING \nAbstract: We describe the design and tuning of a controller for enforcing compliance with a prescribed velocity profile for a rail-based transportation system. This requires following a trajectory, rather than fixed set-points (as in automobiles). We synthesize a fuzzy controller for tracking the velocity profile, while providing a smooth ride and staying within the prescribed speed limits. We use a genetic algorithm to tune the fuzzy controller's performance by adjusting its parameters (the scaling factors and the membership functions) in a sequential order of significance. We show that this approach results in a controller that is superior to the manually designed one, and with only modest computational effort. This makes it possible to customize automated tuning to a variety of different configurations of the route, the terrain, the power configuration, and the cargo. ",
+ "neighbors": [
+ 917
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1320,
+ "label": 5,
+ "text": "Title: Predicting Ordinal Classes in ILP \nAbstract: This paper is devoted to the problem of learning to predict ordinal (i.e., ordered discrete) classes in an ILP setting. We start with a relational regression algorithm named SRT (Structural Regression Trees) and study various ways of transforming it into a first-order learner for ordinal classification tasks. Combinations of these algorithm variants with several data preprocessing methods are compared on two ILP benchmark data sets to verify the relative strengths and weaknesses of the strategies and to study the trade-off between optimal categorical classification accuracy (hit rate) and minimum distance-based error. Preliminary results indicate that this is a promising avenue towards algorithms that combine aspects of classification and regression in relational learning.",
+ "neighbors": [
+ 128,
+ 198,
+ 716,
+ 796,
+ 1108
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1321,
+ "label": 6,
+ "text": "Title: Mistake-Driven Learning in Text Categorization \nAbstract: Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature - text categorization. We argue that these algorithms which categorize documents by learning a linear separator in the feature space have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set. ",
+ "neighbors": [
+ 255,
+ 1281
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1322,
+ "label": 2,
+ "text": "Title: An Efficient Implementation of Sigmoidal Neural Nets in Temporal Coding with Noisy Spiking Neurons \nAbstract: We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence we can show that networks of noisy spiking neurons are \"universal approximators\" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organization of computations in networks of spiking neurons systems has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, our fast and noise-robust implementation of sigmoidal neural nets via temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse stream VLSI. ",
+ "neighbors": [
+ 973,
+ 1058
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1323,
+ "label": 3,
+ "text": "Title: Monte Carlo Approach to Bayesian Regression Modeling \nAbstract: In the framework of a functional response model (i.e. a regression model, or a feedforward neural network) an estimator of a nonlinear response function is constructed from a set of functional units. The parameters defining these functional units are estimated using the Bayesian approach. A sample representing the Bayesian posterior distribution is obtained by applying the Markov chain Monte Carlo procedure, namely the combination of Gibbs and Metropolis-Hastings algorithms. The method is described for histogram, B-spline and radial basis function estimators of a response function. In general, the proposed approach is suitable for finding Bayes-optimal values of parameters in a complicated parameter space. We illustrate the method on numerical examples. ",
+ "neighbors": [],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1324,
+ "label": 2,
+ "text": "Title: Information Processing in Primate Retinal Cone Pathways: A Model \nAbstract: In the framework of a functional response model (i.e. a regression model, or a feedforward neural network) an estimator of a nonlinear response function is constructed from a set of functional units. The parameters defining these functional units are estimated using the Bayesian approach. A sample representing the Bayesian posterior distribution is obtained by applying the Markov chain Monte Carlo procedure, namely the combination of Gibbs and Metropolis-Hastings algorithms. The method is described for histogram, B-spline and radial basis function estimators of a response function. In general, the proposed approach is suitable for finding Bayes-optimal values of parameters in a complicated parameter space. We illustrate the method on numerical examples. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1325,
+ "label": 1,
+ "text": "Title: A Comparison between Cellular Encoding and Direct Encoding for Genetic Neural Networks \nAbstract: This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input.",
+ "neighbors": [
+ 675,
+ 1043,
+ 1185,
+ 1249
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1326,
+ "label": 4,
+ "text": "Title: Towards a Reactive Critic \nAbstract: In this paper we propose a reactive critic, that is able to respond to changing situations. We will explain why this is usefull in reinforcement learning, where the critic is used to improve the control strategy. We take a problem for which we can derive the solution analytically. This enables us to investigate the relation between the parameters and the resulting approximations of the critic. We will also demonstrate how the reactive critic reponds to changing situations.",
+ "neighbors": [
+ 327,
+ 426
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1327,
+ "label": 2,
+ "text": "Title: Nonlinear Resonance in Neuron Dynamics in Statistical Mechanics and Complex Systems \nAbstract: Hubler's technique using aperiodic forces to drive nonlinear oscillators to resonance is analyzed. The oscillators being examined are effective neurons that model Hopfield neural networks. The method is shown to be valid under several different circumstances. It is verified through analysis of the power spectrum, force, resonance, and energy transfer of the system. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1328,
+ "label": 2,
+ "text": "Title: The Role of Activity in Synaptic Competition at the Neuromuscular Junction \nAbstract: An extended version of the dual constraint model of motor end-plate morphogenesis is presented that includes activity dependent and independent competition. It is supported by a wide range of recent neurophysiological evidence that indicates a strong relationship between synaptic efficacy and survival. The computational model is justified at the molecular level and its predictions match the developmental and regenerative behaviour of real synapses.",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1329,
+ "label": 1,
+ "text": "Title: A Computational Environment for Exhaust Nozzle Design \nAbstract: The Nozzle Design Associate (NDA) is a computational environment for the design of jet engine exhaust nozzles for supersonic aircraft. NDA may be used either to design new aircraft or to design new nozzles that adapt existing aircraft so they may be reutilized for new missions. NDA was developed in a collaboration between computer scientists at Rut-gers University and exhaust nozzle designers at General Electric Aircraft Engines and General Electric Corporate Research and Development. The NDA project has two principal goals: to provide a useful engineering tool for exhaust nozzle design, and to explore fundamental research issues that arise in the application of automated design optimization methods to realistic engineering problems. ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1330,
+ "label": 2,
+ "text": "Title: New Modes of Generalization in Perceptual Learning \nAbstract: The learning of many visual perceptual tasks, such as motion discrimination, has been shown to be specific to the practiced stimulus, and new stimuli require re-learning from scratch [1-6]. This specificity, found in so many different tasks, supports the hypothesis that perceptual learning takes place in early visual cortical areas. In contrast, using a novel paradigm in motion discrimination where learning has been shown to be specific, we found generalization: We trained subjects to discriminate the directions of moving dots, and verified that learning does not transfer from the trained direction to a new one. However, by tracking the subjects' performance across time in the new direction, we found that their rate of learning doubled. Moreover, after mastering the task with an easy stimulus, subjects who had practiced briefly to discriminate the easy stimulus in a new direction generalized to a difficult stimulus in that direction. This generalization demanded both the mastering and the brief practice. Thus learning in motion discrimination always generalizes to new stimuli. Learning is manifested in various forms: acceleration of learning rate, indirect transfer, or direct transfer [7, 8]. These results challenge existing theories of perceptual learning, and suggest a more complex picture in which learning takes place at multiple levels. Learning in biological systems is of great importance. But while cognitive learning (or \"problem solving\") is abrupt and generalizes to analogous problems, we appear to acquire our perceptual skills gradually and specifically: human subjects cannot generalize a perceptual discrimination skill to solve similar problems with different attributes. For example, in a discrimination task as described in Fig. 1, a subject who is trained to discriminate motion directions between 43:5 ffi and 46:5 ffi cannot use this skill to discriminate 133:5 ffi from 136:5 ffi . 1 Such specificity supports the hypothesis that perceptual learning embodies neuronal modifications in the brain's stimulus-specific cortical areas (e.g., visual area MT) [1-6]. In contrast to previous results of specificity, we will show, in three experiments, that learning in motion discrimination always generalizes. (1) When the task is easy, it generalizes to all directions after training in ",
+ "neighbors": [],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1331,
+ "label": 6,
+ "text": "Title: Bayesian Induction of Features in Temporal Domains \nAbstract: Most concept induction algorithms process concept instances described in terms of properties that remain constant over time. In temporal domains, instances are best described in terms of properties whose values vary with time. Data engineering is called upon in temporal domains to transform the raw data into an appropriate form for concept induction. I investigate a method for inducing features suitable for classifying finite, univariate, time series that are governed by unknown deterministic processes contaminated by noise. In a supervised setting, I induce piecewise polynomials of appropriate complexity to characterize the data in each class, using Bayesian model induction principles. In this study, I evaluate the proposed method empirically in a semi-deterministic domain: the waveform classification problem, originally presented in the CART book. I compared the classification accuracy of the proposed algorithm to the accuracy attained by C4.5 under various noise levels. Feature induction improved the classification accuracy in noisy situations, but degraded it when there was no noise. The results demonstrate the value of the proposed method in the presence of noise, and reveal a weakness shared by all classifiers using generative rather than discriminative models: sensitivity to model inaccuracies. ",
+ "neighbors": [
+ 1121
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1332,
+ "label": 5,
+ "text": "Title: Limits of Control Flow on Parallelism \nAbstract: This paper discusses three techniques useful in relaxing the constraints imposed by control flow on parallelism: control dependence analysis, executing multiple flows of control simultaneously, and speculative execution. We evaluate these techniques by using trace simulations to find the limits of parallelism for machines that employ different combinations of these techniques. We have three major results. First, local regions of code have limited parallelism, and control dependence analysis is useful in extracting global parallelism from different parts of a program. Second, a superscalar processor is fundamentally limited because it cannot execute independent regions of code concurrently. Higher performance can be obtained with machines, such as multiprocessors and dataflow machines, that can simultaneously follow multiple flows of control. Finally, without speculative execution to allow instructions to execute before their control dependences are resolved, only modest amounts of parallelism can be obtained for programs with complex control flow. ",
+ "neighbors": [
+ 142,
+ 423,
+ 1112
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1333,
+ "label": 6,
+ "text": "Title: On the Sample Complexity of Weakly Learning \nAbstract: In this paper, we study the sample complexity of weak learning. That is, we ask how much data must be collected from an unknown distribution in order to extract a small but significant advantage in prediction. We show that it is important to distinguish between those learning algorithms that output deterministic hypotheses and those that output randomized hypotheses. We prove that in the weak learning model, any algorithm using deterministic hypotheses to weakly learn a class of Vapnik-Chervonenkis dimension d(n) requires ( d(n)) examples. In contrast, when randomized hypotheses are allowed, we show that fi(1) examples suffice in some cases. We then show that there exists an efficient algorithm using deterministic hypotheses that weakly learns against any distribution on a set of size d(n) with only O(d(n) 2=3 ) examples. Thus for the class of symmetric Boolean functions over n variables, where the strong learning sample complexity is fi(n), the sample complexity for weak learning using deterministic hypotheses is ( n) and O(n 2=3 ), and the sample complexity for weak learning using randomized hypotheses is fi(1). Next we prove the existence of classes for which the distribution-free sample size required to obtain a slight advantage in prediction over random guessing is essentially equal to that required to obtain arbitrary accuracy. Finally, for a class of small circuits, namely all parity functions of subsets of n Boolean variables, we prove a weak learning sample complexity of fi(n). This bound holds even if the weak learning algorithm is allowed to replace random sampling with membership queries, and the target distribution is uniform on f0; 1g n . p",
+ "neighbors": [
+ 257,
+ 392,
+ 767,
+ 1082
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1334,
+ "label": 1,
+ "text": "Title: Adaptation of Genetic Algorithms for Engineering Design Optimization \nAbstract: Genetic algorithms have been extensively used in different domains as a means of doing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains it was observed that a simple classical implementation of the GA based on binary encoding and bit mutation and crossover was sometimes inefficient and unable to reach the global optimum. Using floating point representation alone does not eliminate the problem. In this paper we describe a way of augmenting the GA with new operators and strategies that take advantage of the structure and properties of such engineering design domains. Empirical results (initially in the domain of conceptual design of supersonic transport aircraft and the domain of high performance supersonic missile inlet design) demonstrate that the newly formulated GA can be significantly better than the classical GA in terms of efficiency and reliability. http://www.cs.rutgers.edu/~shehata/papers.html",
+ "neighbors": [
+ 91,
+ 428,
+ 1084
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1335,
+ "label": 3,
+ "text": "Title: Discovering Structure in Continuous Variables Using Bayesian Networks \nAbstract: We study Bayesian networks for continuous variables using nonlinear conditional density estimators. We demonstrate that useful structures can be extracted from a data set in a self-organized way and we present sampling techniques for belief update based on ",
+ "neighbors": [
+ 321,
+ 336,
+ 1044
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1336,
+ "label": 2,
+ "text": "Title: Efficient Visual Search: A Connectionist Solution \nAbstract: Searching for objects in scenes is a natural task for people and has been extensively studied by psychologists. In this paper we examine this task from a connectionist perspective. Computational complexity arguments suggest that parallel feed-forward networks cannot perform this task efficiently. One difficulty is that, in order to distinguish the target from distractors, a combination of features must be associated with a single object. Often called the binding problem, this requirement presents a serious hurdle for connectionist models of visual processing when multiple objects are present. Psychophysical experiments suggest that people use covert visual attention to get around this problem. In this paper we describe a psychologically plausible system which uses a focus of attention mechanism to locate target objects. A strategy that combines top-down and bottom-up information is used to minimize search time. The behavior of the resulting system matches the reaction time behavior of people in several interesting tasks. ",
+ "neighbors": [
+ 303
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1337,
+ "label": 1,
+ "text": "Title: Coevolving Communicative Behavior in a Linear Pursuer-Evader Game \nAbstract: The pursuer-evader (PE) game is recognized as an important domain in which to study the coevolution of robust adaptive behavior and protean behavior (Miller and Cliff, 1994). Nevertheless, the potential of the game is largely unrealized due to methodological hurdles in coevolutionary simulation raised by PE; versions of the game that have optimal solutions (Isaacs, 1965) are closed-ended, while other formulations are opaque with respect to their solution space, for the lack of a rigorous metric of agent behavior. This inability to characterize behavior, in turn, obfuscates coevolutionary dynamics. We present a new formulation of PE that affords a rigorous measure of agent behavior and system dynamics. The game is moved from the two-dimensional plane to the one-dimensional bit-string; at each time step, the evader generates a bit that the pursuer must simultaneously predict. Because behavior is expressed as a time series, we can employ information theory to provide quantitative analysis of agent activity. Further, this version of PE opens vistas onto the communicative component of pursuit and evasion behavior, providing an open-ended serial communications channel and an open world (via coevolution). Results show that subtle changes to our game determine whether it is open-ended, and profoundly affect the viability of arms-race dynamics. ",
+ "neighbors": [
+ 107,
+ 234,
+ 413
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1338,
+ "label": 1,
+ "text": "Title: Biological metaphors and the design of modular artificial neural networks Master's thesis of \nAbstract: In this paper we describe a self-adjusting algorithm for packet routing in which a reinforcement learning method is embedded into each node of a network. Only local information is used at each node to keep accurate statistics on which routing policies lead to minimal routing times. In simple experiments involving a 36-node irregularly-connected network, this learning approach proves superior to routing based on precomputed shortest paths.",
+ "neighbors": [
+ 91,
+ 1194,
+ 1231,
+ 1249
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1339,
+ "label": 2,
+ "text": "Title: Comparing Methods for Refining Certainty-Factor Rule-Bases \nAbstract: This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the Rapture system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the Kbann system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.",
+ "neighbors": [
+ 88,
+ 759,
+ 1098,
+ 1290
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1340,
+ "label": 6,
+ "text": "Title: CONSTRUCTING CONJUNCTIVE TESTS FOR DECISION TREES \nAbstract: This paper discusses an approach of constructing new attributes based on decision trees and production rules. It can improve the concepts learned in the form of decision trees by simplifying them and improving their predictive accuracy. In addition, this approach can distinguish relevant primitive attributes from irrelevant primitive attributes. ",
+ "neighbors": [
+ 892,
+ 1008,
+ 1057
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1341,
+ "label": 2,
+ "text": "Title: Models of perceptual learning in vernier hyperacuity \nAbstract: Performance of human subjects in a wide variety of early visual processing tasks improves with practice. HyperBF networks (Poggio and Girosi, 1990) constitute a mathematically well-founded framework for understanding such improvement in performance, or perceptual learning, in the class of tasks known as visual hyperacuity. The present article concentrates on two issues raised by the recent psychophysical and computational findings reported in (Poggio et al., 1992b; Fahle and Edelman, 1992). First, we develop a biologically plausible extension of the HyperBF model that takes into account basic features of the functional architecture of early vision. Second, we explore various learning modes that can coexist within the HyperBF framework and focus on two unsupervised learning rules which may be involved in hyperacuity learning. Finally, we report results of psychophysical experiments that are consistent with the hypothesis that activity-dependent presynaptic amplification may be involved in perceptual learning in hyperacuity. ",
+ "neighbors": [
+ 357
+ ],
+ "mask": "Test"
+ },
+ {
+ "node_id": 1342,
+ "label": 3,
+ "text": "Title: Mining and Model Simplicity: A Case Study in Diagnosis \nAbstract: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org) c fl 1996, American Association for Artificial Intelligence. All rights reserved. Abstract We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models. ",
+ "neighbors": [
+ 751,
+ 884,
+ 1032,
+ 1208
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1343,
+ "label": 2,
+ "text": "Title: Egocentric spatial representation in early vision \nAbstract: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org) c fl 1996, American Association for Artificial Intelligence. All rights reserved. Abstract We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1344,
+ "label": 2,
+ "text": "Title: Regression with Input-dependent Noise: A Gaussian Process Treatment \nAbstract: Technical Report NCRG/98/002, available from http://www.ncrg.aston.ac.uk/ To appear in Advances in Neural Information Processing Systems 10 eds. M. I. Jordan, M. J. Kearns and S. A. Solla. Lawrence Erlbaum (1998). Abstract Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that well-approximates the true variance.",
+ "neighbors": [
+ 43,
+ 89
+ ],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1345,
+ "label": 2,
+ "text": "Title: INVERSION IN TIME \nAbstract: Inversion of multilayer synchronous networks is a method which tries to answer questions like \"What kind of input will give a desired output?\" or \"Is it possible to get a desired output (under special input/output constraints)?\". We will describe two methods of inverting a connectionist network. Firstly, we extend inversion via backpropagation (Linden/Kindermann [4], Williams [11]) to recurrent (El-man [1], Jordan [3], Mozer [5], Williams/Zipser [10]), time-delayed (Waibel at al. [9]) and discrete versions of continuous networks (Pineda [7], Pearlmutter [6]). The result of inversion is an input vector. The corresponding output vector is equal to the target vector except a small remainder. The knowledge of those attractors may help to understand the function and the generalization qualities of connectionist systems of this kind. Secondly, we introduce a new inversion method for proving the non-existence of an input combination under special constraints, e.g. in a subspace of the input space. This method works by iterative exclusion of invalid activation values. It might be a helpful way to judge the properties of a trained network. We conclude with simulation results of three different tasks: XOR, morse signal decoding and handwritten digit recognition. ",
+ "neighbors": [],
+ "mask": "Validation"
+ },
+ {
+ "node_id": 1346,
+ "label": 4,
+ "text": "Title: ALECSYS and the AutonoMouse: Learning to Control a Real Robot by Distributed Classifier Systems \nAbstract: Chaque parametre du modele est penalise individuellement. Le reglage de ces penalisations se fait automatiquement a partir de la definition d'un hyperparametre de regularisation globale. Cet hyperparametre, qui controle la complexite du regresseur, peut ^etre estime par des techniques de reechantillonnage. Nous montrons experimentalement les performances et la stabilite de la penalisation multiple adaptative dans le cadre de la regression lineaire. Nous avons choisi des problemes pour lesquels le probleme du controle de la complexite est particulierement crucial, comme dans le cadre plus general de l'estimation fonctionnelle. Les comparaisons avec les moindres carres regularises et la selection de variables nous permettent de deduire les conditions d'application de chaque algorithme de penalisation. Lors des simulations, nous testons egalement plusieurs techniques de reechantillonnage. Ces techniques sont utilisees pour selectionner la complexite optimale des estimateurs de la fonction de regression. Nous comparons les pertes occasionnees par chacune d'entre elles lors de la selection de modeles sous-optimaux. Nous regardons egalement si elles permettent de determiner l'estimateur de la fonction de regression minimisant l'erreur en generalisation parmi les differentes methodes de penalisation en competition. ",
+ "neighbors": [
+ 372,
+ 1144
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1347,
+ "label": 1,
+ "text": "Title: An Adverse Interaction between the Crossover Operator and a Restriction on Tree Depth of Crossover\nAbstract: The Crossover operator is common to most implementations of Genetic Programming (GP). Another, usually unavoidable, factor is some form of restriction on the size of trees in the GP population. This paper concentrates on the interaction between the Crossover operator and a restriction on tree depth demonstrated by the MAX problem, which involves returning the largest possible value for given function and terminal sets. ",
+ "neighbors": [
+ 978
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1348,
+ "label": 6,
+ "text": "Title: A Polynomial Time Incremental Algorithm for Regular Grammar Inference \nAbstract: This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than VC-type bounds, but they require more computation. ",
+ "neighbors": [
+ 1349
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1349,
+ "label": 6,
+ "text": "Title: Learning DFA from Simple Examples \nAbstract: We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. ",
+ "neighbors": [
+ 54,
+ 392,
+ 1348
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1350,
+ "label": 3,
+ "text": "Title: Belief Maintenance with Probabilistic Logic \nAbstract: We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. ",
+ "neighbors": [
+ 968,
+ 1351
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1351,
+ "label": 3,
+ "text": "Title: Forecasting Glucose Concentration in Diabetic Patients using Ignorant Belief Networks \nAbstract: We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. ",
+ "neighbors": [
+ 968,
+ 1350
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1352,
+ "label": 2,
+ "text": "Title: Simple Synchrony Networks Learning to Parse Natural Language with Temporal Synchrony Variable Binding \nAbstract: The Simple Synchrony Network (SSN) is a new connectionist architecture, incorporating the insights of Temporal Synchrony Variable Binding (TSVB) into Simple Recurrent Networks. The use of TSVB means SSNs can output representations of structures, and can learn generalisations over the constituents of these structures (as required by systematicity). This paper describes the SSN and an associated training algorithm, and demonstrates SSNs' generalisation abilities through results from training SSNs to parse real natural language sentences. ",
+ "neighbors": [
+ 1173,
+ 1179
+ ],
+ "mask": "Train"
+ },
+ {
+ "node_id": 1353,
+ "label": 1,
+ "text": "Title: The MAX Problem for Genetic Programming Highlighting an Adverse Interaction between the Crossover Operator and\nAbstract: The Crossover operator is common to most implementations of Genetic Programming (GP). Another, usually unavoidable, factor is some form of restriction on the size of trees in the GP population. This paper concentrates on the interaction between the Crossover operator and a restriction on tree depth demonstrated by the MAX problem, which involves returning the largest possible value for given function and terminal sets. Some characteristics and inadequacies of Crossover in `normal' use are highlighted and discussed. Subtree discovery and movement takes place mostly near the leaf nodes, with nodes near the root left untouched. Diversity drops quickly to zero near the root node in the tree population. GP is then unable to create `fitter' trees via the crossover operator, leaving a Mutation operator as the only common, but ineffective, route to discovery of `fitter' trees. ",
+ "neighbors": [
+ 978
+ ],
+ "mask": "Train"
+ }
+]
\ No newline at end of file