id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cs/9603104 | null | D. A. Cohn, Z. Ghahramani, M. I. Jordan | Active Learning with Statistical Models | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
129-145 | null | null | cs.AI | null | For many types of machine learning algorithms, one can compute the
statistically `optimal' way to select training data. In this paper, we review
how optimal data selection techniques have been used with feedforward neural
networks. We then show how the same principles may be used to select data for
two alternative, statistically-based learning architectures: mixtures of
Gaussians and locally weighted regression. While the techniques for neural
networks are computationally expensive and approximate, the techniques for
mixtures of Gaussians and locally weighted regression are both efficient and
accurate. Empirically, we observe that the optimality criterion sharply
decreases the number of training examples the learner needs in order to achieve
good performance.
| [
{
"version": "v1",
"created": "Fri, 1 Mar 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Cohn",
"D. A.",
""
],
[
"Ghahramani",
"Z.",
""
],
[
"Jordan",
"M. I.",
""
]
] |
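To make the selection criterion concrete: below is a minimal sketch of variance-driven active learning in the spirit of this paper, querying wherever the learner's predictive variance is largest. The variance here is estimated with a bootstrap ensemble of quadratic fits rather than the closed-form variances the paper derives for mixtures of Gaussians and locally weighted regression; the target function and all constants are invented for illustration.

```python
# Variance-driven active learning sketch: query the input where predictive
# variance is largest. Variance comes from a bootstrap ensemble of quadratic
# fits; the paper instead derives it in closed form for mixtures of
# Gaussians and locally weighted regression.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_variance(x_train, y_train, x_pool, n_models=20):
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), len(x_train))  # bootstrap sample
        coeffs = np.polyfit(x_train[idx], y_train[idx], deg=2)
        preds.append(np.polyval(coeffs, x_pool))
    return np.array(preds).var(axis=0)

f = lambda x: np.sin(3 * x)                # the unknown target to learn
x_pool = np.linspace(-1, 1, 200)           # candidate query locations
x_lab = rng.uniform(-1, 1, 8)              # small initial labeled set
y_lab = f(x_lab)

for step in range(10):
    var = ensemble_variance(x_lab, y_lab, x_pool)
    query = x_pool[np.argmax(var)]         # ask where we are least certain
    x_lab = np.append(x_lab, query)
    y_lab = np.append(y_lab, f(query))     # the oracle labels the query
```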
cs/9604101 | null | T. Walsh | A Divergence Critic for Inductive Proof | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
209-235 | null | null | cs.AI | null | Inductive theorem provers often diverge. This paper describes a simple
critic, a computer program which monitors the construction of inductive proofs
attempting to identify diverging proof attempts. Divergence is recognized by
means of a ``difference matching'' procedure. The critic then proposes lemmas
and generalizations which ``ripple'' these differences away so that the proof
can go through without divergence. The critic enables the theorem prover Spike
to prove many theorems completely automatically from the definitions alone.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Walsh",
"T.",
""
]
] |
cs/9604102 | null | E. Marchiori | Practical Methods for Proving Termination of General Logic Programs | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
179-208 | null | null | cs.AI | null | Termination of logic programs with negated body atoms (here called general
logic programs) is an important topic. One reason is that many computational
mechanisms used to process negated atoms, like Clark's negation as failure and
Chan's constructive negation, are based on termination conditions. This paper
introduces a methodology for proving termination of general logic programs
w.r.t. the Prolog selection rule. The idea is to distinguish parts of the
program depending on whether or not their termination depends on the selection
rule. To this end, the notions of low-, weakly up-, and up-acceptable program
are introduced. We use these notions to develop a methodology for proving
termination of general logic programs, and show how interesting problems in
non-monotonic reasoning can be formalized and implemented by means of
terminating general logic programs.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Marchiori",
"E.",
""
]
] |
cs/9604103 | null | D. Fisher | Iterative Optimization and Simplification of Hierarchical Clusterings | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
147-178 | null | null | cs.AI | null | Clustering is often used for discovering structure in data. Clustering
systems differ in the objective function used to evaluate clustering quality
and the control strategy used to search the space of clusterings. Ideally, the
search strategy should consistently construct clusterings of high quality, but
be computationally inexpensive as well. In general, we cannot have it both
ways, but we can partition the search so that a system inexpensively constructs
a `tentative' clustering for initial examination, followed by iterative
optimization, which continues to search in the background for improved clusterings.
Given this motivation, we evaluate an inexpensive strategy for creating initial
clusterings, coupled with several control strategies for iterative
optimization, each of which repeatedly modifies an initial clustering in search
of a better one. One of these methods appears novel as an iterative
optimization strategy in clustering contexts. Once a clustering has been
constructed it is judged by analysts -- often according to task-specific
criteria. Several authors have abstracted these criteria and posited a generic
performance task akin to pattern completion, where the error rate over
completed patterns is used to `externally' judge clustering utility. Given this
performance task, we adapt resampling-based pruning strategies used by
supervised learning systems to the task of simplifying hierarchical
clusterings, thus promising to ease post-clustering analysis. Finally, we
propose a number of objective functions, based on attribute-selection measures
for decision-tree induction, that might perform well on the error rate and
simplicity dimensions.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Fisher",
"D.",
""
]
] |
cs/9605101 | null | G. I. Webb | Further Experimental Evidence against the Utility of Occam's Razor | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 4, (1996),
397-417 | null | null | cs.AI | null | This paper presents new experimental evidence against the utility of Occam's
razor. A systematic procedure is presented for post-processing decision trees
produced by C4.5. This procedure was derived by rejecting Occam's razor and
instead attending to the assumption that similar objects are likely to belong
to the same class. It increases a decision tree's complexity without altering
the performance of that tree on the training data from which it is inferred.
The resulting more complex decision trees are demonstrated to have, on average,
for a variety of common learning tasks, higher predictive accuracy than the
less complex original decision trees. This result raises considerable doubt
about the utility of Occam's razor as it is commonly applied in modern machine
learning.
| [
{
"version": "v1",
"created": "Wed, 1 May 1996 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Webb",
"G. I.",
""
]
] |
cs/9605102 | null | S. H. Nienhuys-Cheng, R. deWolf | Least Generalizations and Greatest Specializations of Sets of Clauses | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
341-363 | null | null | cs.AI | null | The main operations in Inductive Logic Programming (ILP) are generalization
and specialization, which only make sense in a generality order. In ILP, the
three most important generality orders are subsumption, implication and
implication relative to background knowledge. The two languages used most often
are languages of clauses and languages of only Horn clauses. This gives a total
of six different ordered languages. In this paper, we give a systematic
treatment of the existence or non-existence of least generalizations and
greatest specializations of finite sets of clauses in each of these six ordered
sets. We survey results already obtained by others and also contribute some
answers of our own. Our main new results are, firstly, the existence of a
computable least generalization under implication of every finite set of
clauses containing at least one non-tautologous function-free clause (among
other, not necessarily function-free clauses). Secondly, we show that such a
least generalization need not exist under relative implication, not even if
both the set that is to be generalized and the background knowledge are
function-free. Thirdly, we give a complete discussion of existence and
non-existence of greatest specializations in each of the six ordered languages.
| [
{
"version": "v1",
"created": "Wed, 1 May 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Nienhuys-Cheng",
"S. H.",
""
],
[
"deWolf",
"R.",
""
]
] |
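The generalization operation at the heart of this paper can be illustrated with Plotkin-style anti-unification, which computes the least general generalization (lgg) of two atoms under subsumption. A minimal sketch, using a tuple-based term representation and variable naming invented for illustration:

```python
# Plotkin-style least general generalization (lgg) of two atoms under
# subsumption, the basic operation behind the least generalizations the
# paper studies. Atoms are (predicate, args) tuples.
def lgg_atoms(atom1, atom2):
    (pred1, args1), (pred2, args2) = atom1, atom2
    assert pred1 == pred2 and len(args1) == len(args2)
    table, out = {}, []                    # (t1, t2) -> shared variable
    for t1, t2 in zip(args1, args2):
        if t1 == t2:
            out.append(t1)                 # identical terms are kept
        else:
            if (t1, t2) not in table:      # same mismatch, same variable
                table[(t1, t2)] = f"X{len(table)}"
            out.append(table[(t1, t2)])
    return (pred1, tuple(out))

# lgg of p(a, a, b) and p(c, c, b) is p(X0, X0, b)
print(lgg_atoms(("p", ("a", "a", "b")), ("p", ("c", "c", "b"))))
```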
cs/9605103 | null | L. P. Kaelbling, M. L. Littman, A. W. Moore | Reinforcement Learning: A Survey | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
237-285 | null | null | cs.AI | null | This paper surveys the field of reinforcement learning from a
computer-science perspective. It is written to be accessible to researchers
familiar with machine learning. Both the historical basis of the field and a
broad selection of current work are summarized. Reinforcement learning is the
problem faced by an agent that learns behavior through trial-and-error
interactions with a dynamic environment. The work described here has a
resemblance to work in psychology, but differs considerably in the details and
in the use of the word ``reinforcement.'' The paper discusses central issues of
reinforcement learning, including trading off exploration and exploitation,
establishing the foundations of the field via Markov decision theory, learning
from delayed reinforcement, constructing empirical models to accelerate
learning, making use of generalization and hierarchy, and coping with hidden
state. It concludes with a survey of some implemented systems and an assessment
of the practical utility of current methods for reinforcement learning.
| [
{
"version": "v1",
"created": "Wed, 1 May 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Kaelbling",
"L. P.",
""
],
[
"Littman",
"M. L.",
""
],
[
"Moore",
"A. W.",
""
]
] |
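A minimal sketch of tabular Q-learning, one of the methods the survey covers, showing trial-and-error learning from delayed reinforcement. The 5-state chain environment and all constants are invented for illustration:

```python
# Tabular Q-learning on an invented 5-state chain: action 1 moves right,
# action 0 moves left, and only the right end pays reward 1.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1

def greedy(qrow):
    ties = np.flatnonzero(qrow == qrow.max())
    return int(rng.choice(ties))           # break ties randomly

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice: the exploration/exploitation
        # trade-off the survey discusses
        a = int(rng.integers(n_actions)) if rng.random() < eps else greedy(Q[s])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # one-step temporal-difference update handles the delayed reward
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # prefers action 1 (right) in non-terminal states
```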
cs/9605104 | null | J. Gratch, S. Chien | Adaptive Problem-solving for Large-scale Scheduling Problems: A Case
Study | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
365-396 | null | null | cs.AI | null | Although most scheduling problems are NP-hard, domain-specific techniques
perform well in practice but are quite expensive to construct. In adaptive
problem solving, domain-specific knowledge is acquired automatically
for a general problem solver with a flexible control architecture. In this
approach, a learning system explores a space of possible heuristic methods for
one well suited to the eccentricities of the given domain and problem
distribution. In this article, we discuss an application of the approach to
scheduling satellite communications. Using problem distributions based on
actual mission requirements, our approach identifies strategies that not only
decrease the amount of CPU time required to produce schedules, but also
increase the percentage of problems that are solvable within computational
resource limitations.
| [
{
"version": "v1",
"created": "Wed, 1 May 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Gratch",
"J.",
""
],
[
"Chien",
"S.",
""
]
] |
cs/9605105 | null | P. Tadepalli, B. K. Natarajan | A Formal Framework for Speedup Learning from Problems and Solutions | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
445-475 | null | null | cs.AI | null | Speedup learning seeks to improve the computational efficiency of problem
solving with experience. In this paper, we develop a formal framework for
learning efficient problem solving from random problems and their solutions. We
apply this framework to two different representations of learned knowledge,
namely control rules and macro-operators, and prove theorems that identify
sufficient conditions for learning in each representation. Our proofs are
constructive in that they are accompanied with learning algorithms. Our
framework captures both empirical and explanation-based speedup learning in a
unified fashion. We illustrate our framework with implementations in two
domains: symbolic integration and Eight Puzzle. This work integrates many
strands of experimental and theoretical work in machine learning, including
empirical learning of control rules, macro-operator learning, Explanation-Based
Learning (EBL), and Probably Approximately Correct (PAC) Learning.
| [
{
"version": "v1",
"created": "Wed, 1 May 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Tadepalli",
"P.",
""
],
[
"Natarajan",
"B. K.",
""
]
] |
cs/9605106 | null | L. Pryor, G. Collins | Planning for Contingencies: A Decision-based Approach | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
287-339 | null | null | cs.AI | null | A fundamental assumption made by classical AI planners is that there is no
uncertainty in the world: the planner has full knowledge of the conditions
under which the plan will be executed and the outcome of every action is fully
predictable. These planners cannot therefore construct contingency plans, i.e.,
plans in which different actions are performed in different circumstances. In
this paper we discuss some issues that arise in the representation and
construction of contingency plans and describe Cassandra, a partial-order
contingency planner. Cassandra uses explicit decision-steps that enable the
agent executing the plan to decide which plan branch to follow. The
decision-steps in a plan result in subgoals to acquire knowledge, which are
planned for in the same way as any other subgoals. Cassandra thus distinguishes
the process of gathering information from the process of making decisions. The
explicit representation of decisions in Cassandra allows a coherent approach to
the problems of contingent planning, and provides a solid base for extensions
such as the use of different decision-making procedures.
| [
{
"version": "v1",
"created": "Wed, 1 May 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Pryor",
"L.",
""
],
[
"Collins",
"G.",
""
]
] |
cs/9606101 | null | S. Bhansali, G. A. Kramer, T. J. Hoar | A Principled Approach Towards Symbolic Geometric Constraint Satisfaction | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 4, (1996),
419-443 | null | null | cs.AI | null | An important problem in geometric reasoning is to find the configuration of a
collection of geometric bodies so as to satisfy a set of given constraints.
Recently, it has been suggested that this problem can be solved efficiently by
symbolically reasoning about geometry. This approach, called degrees of freedom
analysis, employs a set of specialized routines called plan fragments that
specify how to change the configuration of a set of bodies to satisfy a new
constraint while preserving existing constraints. A potential drawback, which
limits the scalability of this approach, is the difficulty of
writing plan fragments. In this paper we address this limitation by showing how
these plan fragments can be automatically synthesized using first principles
about geometric bodies, actions, and topology.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Bhansali",
"S.",
""
],
[
"Kramer",
"G. A.",
""
],
[
"Hoar",
"T. J.",
""
]
] |
cs/9606102 | null | R. I. Brafman, M. Tennenholtz | On Partially Controlled Multi-Agent Systems | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996),
477-507 | null | null | cs.AI | null | Motivated by the control theoretic distinction between controllable and
uncontrollable events, we distinguish between two types of agents within a
multi-agent system: controllable agents, which are directly controlled by the
system's designer, and uncontrollable agents, which are not under the
designer's direct control. We refer to such systems as partially controlled
multi-agent systems, and we investigate how one might influence the behavior of
the uncontrolled agents through appropriate design of the controlled agents. In
particular, we wish to understand which problems are naturally described in
these terms, what methods can be applied to influence the uncontrollable
agents, the effectiveness of such methods, and whether similar methods work
across different domains. Using a game-theoretic framework, this paper studies
the design of partially controlled multi-agent systems in two contexts: in one
context, the uncontrollable agents are expected utility maximizers, while in
the other they are reinforcement learners. We suggest different techniques for
controlling agents' behavior in each domain, assess their success, and examine
their relationship.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Brafman",
"R. I.",
""
],
[
"Tennenholtz",
"M.",
""
]
] |
cs/9608103 | null | K. Yip, F. Zhao | Spatial Aggregation: Theory and Applications | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996), 1-26 | null | null | cs.AI | null | Visual thinking plays an important role in scientific reasoning. Based on the
research in automating diverse reasoning tasks about dynamical systems,
nonlinear controllers, kinematic mechanisms, and fluid motion, we have
identified a style of visual thinking, imagistic reasoning. Imagistic reasoning
organizes computations around image-like, analogue representations so that
perceptual and symbolic operations can be brought to bear to infer structure
and behavior. Programs incorporating imagistic reasoning have been shown to
perform at an expert level in domains that defy current analytic or numerical
methods. We have developed a computational paradigm, spatial aggregation, to
unify the description of a class of imagistic problem solvers. A program
written in this paradigm has the following properties. It takes a continuous
field and optional objective functions as input, and produces high-level
descriptions of structure, behavior, or control actions. It computes a
multi-layer of intermediate representations, called spatial aggregates, by
forming equivalence classes and adjacency relations. It employs a small set of
generic operators such as aggregation, classification, and localization to
perform bidirectional mapping between the information-rich field and
successively more abstract spatial aggregates. It uses a data structure, the
neighborhood graph, as a common interface to modularize computations. To
illustrate our theory, we describe the computational structure of three
implemented problem solvers -- KAM, MAPS, and HIPAIR -- in terms of the
spatial aggregation generic operators by mixing and matching a library of
commonly used routines.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Yip",
"K.",
""
],
[
"Zhao",
"F.",
""
]
] |
cs/9608104 | null | R. Ben-Eliyahu | A Hierarchy of Tractable Subsets for Computing Stable Models | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996), 27-52 | null | null | cs.AI | null | Finding the stable models of a knowledge base is a significant computational
problem in artificial intelligence. This task is at the computational heart of
truth maintenance systems, autoepistemic logic, and default logic.
Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes
of knowledge bases, Omega_1, Omega_2, ..., with the following properties: first,
Omega_1 is the class of all stratified knowledge bases; second, if a knowledge
base Pi is in Omega_k, then Pi has at most k stable models, and all of them may
be found in time O(lnk), where l is the length of the knowledge base and n the
number of atoms in Pi; third, for an arbitrary knowledge base Pi, we can find
the minimum k such that Pi belongs to Omega_k in time polynomial in the size of
Pi; and, last, where K is the class of all knowledge bases, it is the case that
union{i=1 to infty} Omega_i = K, that is, every knowledge base belongs to some
class in the hierarchy.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 1996 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Ben-Eliyahu",
"R.",
""
]
] |
cs/9609101 | null | A. Gerevini, L. Schubert | Accelerating Partial-Order Planners: Some Techniques for Effective
Search Control and Pruning | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 5, (1996), 95-137 | null | null | cs.AI | null | We propose some domain-independent techniques for bringing well-founded
partial-order planners closer to practicality. The first two techniques are
aimed at improving search control while keeping overhead costs low. One is
based on a simple adjustment to the default A* heuristic used by UCPOP to
select plans for refinement. The other is based on preferring ``zero
commitment'' (forced) plan refinements whenever possible, and using LIFO
prioritization otherwise. A more radical technique is the use of operator
parameter domains to prune search. These domains are initially computed from
the definitions of the operators and the initial and goal conditions, using a
polynomial-time algorithm that propagates sets of constants through the
operator graph, starting in the initial conditions. During planning, parameter
domains can be used to prune nonviable operator instances and to remove
spurious clobbering threats. In experiments based on modifications of UCPOP,
our improved plan and goal selection strategies gave speedups by factors
ranging from 5 to more than 1000 for a variety of problems that are nontrivial
for the unmodified version. Crucially, the hardest problems gave the greatest
improvements. The pruning technique based on parameter domains often gave
speedups by an order of magnitude or more for difficult problems, both with the
default UCPOP search strategy and with our improved strategy. The Lisp code for
our techniques and for the test problems is provided in on-line appendices.
| [
{
"version": "v1",
"created": "Sun, 1 Sep 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Gerevini",
"A.",
""
],
[
"Schubert",
"L.",
""
]
] |
cs/9609102 | null | D. J. Litman | Cue Phrase Classification Using Machine Learning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996), 53-94 | null | null | cs.AI | null | Cue phrases may be used in a discourse sense to explicitly signal discourse
structure, but also in a sentential sense to convey semantic rather than
structural information. Correctly classifying cue phrases as discourse or
sentential is critical in natural language processing systems that exploit
discourse structure, e.g., for performing tasks such as anaphora resolution and
plan recognition. This paper explores the use of machine learning for
classifying cue phrases as discourse or sentential. Two machine learning
programs (Cgrendel and C4.5) are used to induce classification models from sets
of pre-classified cue phrases and their features in text and speech. Machine
learning is shown to be an effective technique for not only automating the
generation of classification models, but also for improving upon previous
results. When compared to manually derived classification models already in the
literature, the learned models often perform with higher accuracy and contain
new linguistic insights into the data. In addition, the ability to
automatically construct classification models makes it easier to comparatively
analyze the utility of alternative feature representations of the data.
Finally, the ease of retraining makes the learning approach more scalable and
flexible than manual methods.
| [
{
"version": "v1",
"created": "Sun, 1 Sep 1996 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Litman",
"D. J.",
""
]
] |
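The paper's learning setup in miniature, with scikit-learn's DecisionTreeClassifier standing in for C4.5 (the paper used C4.5 and Cgrendel, and derived its features from text and speech). The two features and the tiny pre-classified dataset below are invented placeholders:

```python
# Inducing a cue-phrase classification model from pre-classified examples,
# with a scikit-learn decision tree standing in for C4.5. Features and data
# are invented placeholders.
from sklearn.tree import DecisionTreeClassifier

# features: [is_sentence_initial, preceded_by_pause]; label 1 = discourse use
X = [[1, 1], [1, 0], [0, 0], [0, 1], [1, 1], [0, 0]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1, 1]]))   # classify a new cue phrase occurrence
```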
cs/9610101 | null | G. Zlotkin, J. S. Rosenschein | Mechanisms for Automated Negotiation in State Oriented Domains | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996),
163-238 | null | null | cs.AI | null | This paper lays part of the groundwork for a domain theory of negotiation,
that is, a way of classifying interactions so that it is clear, given a domain,
which negotiation mechanisms and strategies are appropriate. We define State
Oriented Domains, a general category of interaction. Necessary and sufficient
conditions for cooperation are outlined. We use the notion of worth in an
altered definition of utility, thus enabling agreements in a wider class of
joint-goal reachable situations. An approach is offered for conflict
resolution, and it is shown that even in a conflict situation, partial
cooperative steps can be taken by interacting agents (that is, agents in
fundamental conflict might still agree to cooperate up to a certain point). A
Unified Negotiation Protocol (UNP) is developed that can be used in all types
of encounters. It is shown that in certain borderline cooperative situations, a
partial cooperative agreement (i.e., one that does not achieve all agents'
goals) might be preferred by all agents, even though there exists a rational
agreement that would achieve all their goals. Finally, we analyze cases where
agents have incomplete information on the goals and worth of other agents.
First we consider the case where agents' goals are private information, and we
analyze what goal declaration strategies the agents might adopt to increase
their utility. Then, we consider the situation where the agents' goals (and
therefore stand-alone costs) are common knowledge, but the worth they attach to
their goals is private information. We introduce two mechanisms, one 'strict',
the other 'tolerant', and analyze their effects on the stability and efficiency
of negotiation outcomes.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Zlotkin",
"G.",
""
],
[
"Rosenschein",
"J. S.",
""
]
] |
cs/9610102 | null | J. R. Quinlan | Learning First-Order Definitions of Functions | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996),
139-161 | null | null | cs.AI | null | First-order learning involves finding a clause-form definition of a relation
from examples of the relation and relevant background information. In this
paper, a particular first-order learning system is modified to customize it for
finding definitions of functional relations. This restriction leads to faster
learning times and, in some cases, to definitions that have higher predictive
accuracy. Other first-order learning systems might benefit from similar
specialization.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Quinlan",
"J. R.",
""
]
] |
cs/9611101 | null | R. A. Helzerman, M. P. Harper | MUSE CSP: An Extension to the Constraint Satisfaction Problem | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996),
239-288 | null | null | cs.AI | null | This paper describes an extension to the constraint satisfaction problem
(CSP) called MUSE CSP (MUltiply SEgmented Constraint Satisfaction Problem).
This extension is especially useful for those problems which segment into
multiple sets of partially shared variables. Such problems arise naturally in
signal processing applications including computer vision, speech processing,
and handwriting recognition. For these applications, it is often difficult to
segment the data in only one way given the low-level information utilized by
the segmentation algorithms. MUSE CSP can be used to compactly represent
several similar instances of the constraint satisfaction problem. If multiple
instances of a CSP have some common variables which have the same domains and
constraints, then they can be combined into a single instance of a MUSE CSP,
reducing the work required to apply the constraints. We introduce the concepts
of MUSE node consistency, MUSE arc consistency, and MUSE path consistency. We
then demonstrate how MUSE CSP can be used to compactly represent lexically
ambiguous sentences and the multiple sentence hypotheses that are often
generated by speech recognition algorithms so that grammar constraints can be
used to provide parses for all syntactically correct sentences. Algorithms for
MUSE arc and path consistency are provided. Finally, we discuss how to create a
MUSE CSP from a set of CSPs which are labeled to indicate when the same
variable is shared by more than a single CSP.
| [
{
"version": "v1",
"created": "Fri, 1 Nov 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Helzerman",
"R. A",
""
],
[
"Harper",
"M. P.",
""
]
] |
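As background for the consistency notions the paper extends, here is a sketch of classical arc consistency (AC-3) on a binary CSP. The representation (domains as sets, constraints as predicates over directed variable pairs) is invented for illustration:

```python
# Classical arc consistency (AC-3) on a binary CSP, the base notion whose
# MUSE variants the paper defines.
from collections import deque

def ac3(domains, constraints):
    queue = deque(constraints)                 # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # a value of x survives only if some value of y supports it
        pruned = {vx for vx in domains[x]
                  if not any(allowed(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # supports may be gone for arcs that point at x: revisit them
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return domains

domains = {"a": {1, 2, 3}, "b": {1, 2, 3}}
constraints = {("a", "b"): lambda va, vb: va < vb,
               ("b", "a"): lambda vb, va: va < vb}
print(ac3(domains, constraints))               # a -> {1, 2}, b -> {2, 3}
```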
cs/9612101 | null | N. L. Zhang, D. Poole | Exploiting Causal Independence in Bayesian Network Inference | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996),
301-328 | null | null | cs.AI | null | A new method is proposed for exploiting causal independencies in exact
Bayesian network inference. A Bayesian network can be viewed as representing a
factorization of a joint probability into the multiplication of a set of
conditional probabilities. We present a notion of causal independence that
enables one to further factorize the conditional probabilities into a
combination of even smaller factors and consequently obtain a finer-grain
factorization of the joint probability. The new formulation of causal
independence lets us specify the conditional probability of a variable given
its parents in terms of an associative and commutative operator, such as
``or'', ``sum'' or ``max'', on the contribution of each parent. We start with a
simple algorithm VE for Bayesian network inference that, given evidence and a
query variable, uses the factorization to find the posterior distribution of
the query. We show how this algorithm can be extended to exploit causal
independence. Empirical studies, based on the CPCS networks for medical
diagnosis, show that this method is more efficient than previous methods and
allows for inference in larger networks than previous algorithms.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Zhang",
"N. L.",
""
],
[
"Poole",
"D.",
""
]
] |
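The operator-based notion of causal independence can be made concrete with noisy-OR, one of the paper's examples: the conditional probability of the effect decomposes into one small factor per parent, combined with an associative, commutative operator. A sketch with invented numbers:

```python
# Noisy-OR causal independence: the CPT of the effect given its parents is
# assembled from one small factor per parent, here via per-parent
# inhibition probabilities. All numbers are invented.
from itertools import product

# probability that parent i, when present, fails to cause the effect
inhibit = {"flu": 0.4, "cold": 0.7, "allergy": 0.9}

def p_effect(active_parents):
    """P(effect = true | exactly these parents are present)."""
    q = 1.0
    for parent in active_parents:
        q *= inhibit[parent]                   # combine per-parent factors
    return 1.0 - q

# the full 2^3-row CPT recovered from just three per-parent numbers
for bits in product([0, 1], repeat=3):
    present = [p for p, b in zip(inhibit, bits) if b]
    print(bits, round(p_effect(present), 3))
```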
cs/9612102 | null | J. C. Schlimmer, P. C. Wells | Quantitative Results Comparing Three Intelligent Interfaces for
Information Capture: A Case Study Adding Name Information into an Electronic
Personal Organizer | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 5, (1996),
329-349 | null | null | cs.AI | null | Efficiently entering information into a computer is key to enjoying the
benefits of computing. This paper describes three intelligent user interfaces:
handwriting recognition, adaptive menus, and predictive fillin. In the context
of adding a person's name and address to an electronic organizer, tests show
handwriting recognition is slower than typing on an on-screen soft keyboard,
while adaptive menus and predictive fillin can be twice as fast. This paper
also presents strategies for applying these three interfaces to other
information collection domains.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Schlimmer",
"J. C.",
""
],
[
"Wells",
"P. C.",
""
]
] |
cs/9612103 | null | L. M. deCampos | Characterizations of Decomposable Dependency Models | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 5, (1996),
289-300 | null | null | cs.AI | null | Decomposable dependency models possess a number of interesting and useful
properties. This paper presents new characterizations of decomposable models in
terms of independence relationships, which are obtained by adding a single
axiom to the well-known set characterizing dependency models that are
isomorphic to undirected graphs. We also briefly discuss a potential
application of our results to the problem of learning graphical models from
data.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 1996 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"deCampos",
"L. M.",
""
]
] |
cs/9701101 | null | D. R. Wilson, T. R. Martinez | Improved Heterogeneous Distance Functions | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 6, (1997), 1-34 | null | null | cs.AI | null | Instance-based learning techniques typically handle continuous and linear
input values well, but often do not handle nominal input attributes
appropriately. The Value Difference Metric (VDM) was designed to find
reasonable distance values between nominal attribute values, but it largely
ignores continuous attributes, requiring discretization to map continuous
values into nominal values. This paper proposes three new heterogeneous
distance functions, called the Heterogeneous Value Difference Metric (HVDM),
the Interpolated Value Difference Metric (IVDM), and the Windowed Value
Difference Metric (WVDM). These new distance functions are designed to handle
applications with nominal attributes, continuous attributes, or both. In
experiments on 48 applications the new distance metrics achieve higher
classification accuracy on average than three previous distance functions on
those datasets that have both nominal and continuous attributes.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Wilson",
"D. R.",
""
],
[
"Martinez",
"T. R.",
""
]
] |
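A condensed sketch of a heterogeneous distance in the HVDM style: nominal attributes use a Value Difference Metric over class-conditional distributions, continuous attributes use a standard-deviation-normalized difference, and the per-attribute distances combine Euclideanly. Normalization details follow the paper only approximately, and the demo data are invented:

```python
# HVDM-style heterogeneous distance: VDM for nominal attributes, a
# 4-standard-deviation-normalized difference for continuous ones.
import math
from collections import Counter, defaultdict

def vdm(train, labels, attr, v1, v2):
    counts = defaultdict(Counter)              # attr value -> class counts
    for row, c in zip(train, labels):
        counts[row[attr]][c] += 1
    n1, n2 = sum(counts[v1].values()), sum(counts[v2].values())
    return math.sqrt(sum((counts[v1][c] / n1 - counts[v2][c] / n2) ** 2
                         for c in set(labels)))

def hvdm(x, y, train, labels, nominal, sigma):
    total = 0.0
    for a in range(len(x)):
        if nominal[a]:
            d = vdm(train, labels, a, x[a], y[a])
        else:
            d = abs(x[a] - y[a]) / (4 * sigma[a])
        total += d * d                         # combine Euclideanly
    return math.sqrt(total)

train = [("red", 1.0), ("red", 1.2), ("blue", 3.0), ("blue", 2.8)]
labels = ["a", "a", "b", "b"]
print(hvdm(("red", 1.0), ("blue", 3.0), train, labels,
           nominal=[True, False], sigma=[None, 1.0]))
```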
cs/9701102 | null | S. Wermter, V. Weber | SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis
Using Artificial Neural Networks | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 6, (1997), 35-85 | null | null | cs.AI | null | Previous approaches of analyzing spontaneously spoken language often have
been based on encoding syntactic and semantic knowledge manually and
symbolically. While there has been some progress using statistical or
connectionist language models, many current spoken-language systems still use
a relatively brittle, hand-coded symbolic grammar or symbolic semantic
component. In contrast, we describe a so-called screening approach for learning
robust processing of spontaneously spoken language. A screening approach is a
flat analysis which uses shallow sequences of category representations for
analyzing an utterance at various syntactic, semantic and dialog levels. Rather
than using a deeply structured symbolic analysis, we use a flat connectionist
analysis. This screening approach aims at supporting speech and language
processing by using (1) data-driven learning and (2) robustness of
connectionist networks. In order to test this approach, we have developed the
SCREEN system which is based on this new robust, learned and flat analysis. In
this paper, we focus on a detailed description of SCREEN's architecture, the
flat syntactic and semantic analysis, the interaction with a speech recognizer,
and a detailed evaluation analysis of the robustness under the influence of
noisy or incomplete input. The main result of this paper is that flat
representations allow more robust processing of spontaneous spoken language
than deeply structured representations. In particular, we show how the
fault-tolerance and learning capability of connectionist networks can support a
flat analysis for providing more robust spoken-language processing within an
overall hybrid symbolic/connectionist framework.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Wermter",
"S.",
""
],
[
"Weber",
"V.",
""
]
] |
cs/9703101 | null | G. DeGiacomo, M. Lenzerini | A Uniform Framework for Concept Definitions in Description Logics | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 6, (1997), 87-110 | null | null | cs.AI | null | Most modern formalisms used in Databases and Artificial Intelligence for
describing an application domain are based on the notions of class (or concept)
and relationship among classes. One interesting feature of such formalisms is
the possibility of defining a class, i.e., providing a set of properties that
precisely characterize the instances of the class. Many recent articles point
out that there are several ways of assigning a meaning to a class definition
containing some sort of recursion. In this paper, we argue that, instead of
choosing a single style of semantics, we achieve better results by adopting a
formalism that allows for different semantics to coexist. We demonstrate the
feasibility of our argument, by presenting a knowledge representation
formalism, the description logic muALCQ, with the above characteristics. In
addition to the constructs for conjunction, disjunction, negation, quantifiers,
and qualified number restrictions, muALCQ includes special fixpoint constructs
to express (suitably interpreted) recursive definitions. These constructs
enable the usual frame-based descriptions to be combined with definitions of
recursive data structures such as directed acyclic graphs, lists, streams, etc.
We establish several properties of muALCQ, including the decidability and the
computational complexity of reasoning, by formulating a correspondence with a
particular modal logic of programs called the modal mu-calculus.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"DeGiacomo",
"G.",
""
],
[
"Lenzerini",
"M.",
""
]
] |
cs/9704101 | null | P. Agre, I. Horswill | Lifeworld Analysis | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 6, (1997),
111-145 | null | null | cs.AI | null | We argue that the analysis of agent/environment interactions should be
extended to include the conventions and invariants maintained by agents
throughout their activity. We refer to this thicker notion of environment as a
lifeworld and present a partial set of formal tools for describing structures
of lifeworlds and the ways in which they computationally simplify activity. As
one specific example, we apply the tools to the analysis of the Toast system
and show how versions of the system with very different control structures in
fact implement a common control structure together with different conventions
for encoding task state in the positions or states of objects in the
environment.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 1997 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Agre",
"P.",
""
],
[
"Horswill",
"I.",
""
]
] |
cs/9705101 | null | A. Darwiche, G. Provan | Query DAGs: A Practical Paradigm for Implementing Belief-Network
Inference | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 6, (1997),
147-176 | null | null | cs.AI | null | We describe a new paradigm for implementing inference in belief networks,
which consists of two steps: (1) compiling a belief network into an arithmetic
expression called a Query DAG (Q-DAG); and (2) answering queries using a simple
evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a
number, or a symbol for evidence. Each leaf node of a Q-DAG represents the
answer to a network query, that is, the probability of some event of interest.
It appears that Q-DAGs can be generated using any of the standard algorithms
for exact inference in belief networks (we show how they can be generated using
clustering and conditioning algorithms). The time and space complexity of a
Q-DAG generation algorithm is no worse than the time complexity of the
inference algorithm on which it is based. The complexity of a Q-DAG evaluation
algorithm is linear in the size of the Q-DAG, and such inference amounts to a
standard evaluation of the arithmetic expression it represents. The intended
value of Q-DAGs is in reducing the software and hardware resources required to
utilize belief networks in on-line, real-world applications. The proposed
framework also facilitates the development of on-line inference on different
software and hardware platforms due to the simplicity of the Q-DAG evaluation
algorithm. Interestingly enough, Q-DAGs were found to serve other purposes:
simple techniques for reducing Q-DAGs tend to subsume relatively complex
optimization techniques for belief-network inference, such as network-pruning
and computation-caching.
| [
{
"version": "v1",
"created": "Thu, 1 May 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Darwiche",
"A.",
""
],
[
"Provan",
"G.",
""
]
] |
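Step (2) of the paradigm is literally arithmetic-expression evaluation. A minimal sketch of Q-DAG evaluation with an invented node encoding (the paper leaves the concrete representation to the implementation):

```python
# Q-DAG evaluation sketch: each node is a number, an evidence indicator, or
# a '+'/'*' operation over children; answering a query is one pass over the
# DAG. Node encoding is invented for illustration.
import math
from functools import lru_cache

# node id -> ("num", value) | ("ev", variable, value) | ("+" or "*", children)
qdag = {
    0: ("ev", "A", True),  1: ("num", 0.2),
    2: ("ev", "A", False), 3: ("num", 0.8),
    4: ("*", (0, 1)),      5: ("*", (2, 3)),
    6: ("+", (4, 5)),                     # output node: the query's answer
}
evidence = {"A": True}                    # set once per query evaluation

@lru_cache(maxsize=None)
def evaluate(node):
    tag = qdag[node]
    if tag[0] == "num":
        return tag[1]
    if tag[0] == "ev":                    # indicator: 1 iff consistent
        _, var, val = tag
        return 1.0 if evidence.get(var, val) == val else 0.0
    op, children = tag
    vals = [evaluate(c) for c in children]
    return sum(vals) if op == "+" else math.prod(vals)

print(evaluate(6))                        # -> 0.2 under evidence A = True
```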
cs/9705102 | null | D. W. Opitz, J. W. Shavlik | Connectionist Theory Refinement: Genetically Searching the Space of
Network Topologies | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 6, (1997),
177-209 | null | null | cs.AI | null | An algorithm that learns from a set of examples should ideally be able to
exploit the available resources of (a) abundant computing power and (b)
domain-specific knowledge to improve its ability to generalize. Connectionist
theory-refinement systems, which use background knowledge to select a neural
network's topology and initial weights, have proven to be effective at
exploiting domain-specific knowledge; however, most do not exploit available
computing power. This weakness occurs because they lack the ability to refine
the topology of the neural networks they produce, thereby limiting
generalization, especially when given impoverished domain theories. We present
the REGENT algorithm which uses (a) domain-specific knowledge to help create an
initial population of knowledge-based neural networks and (b) genetic operators
of crossover and mutation (specifically designed for knowledge-based networks)
to continually search for better network topologies. Experiments on three
real-world domains indicate that our new algorithm is able to significantly
increase generalization compared to a standard connectionist theory-refinement
system, as well as our previous algorithm for growing knowledge-based networks.
| [
{
"version": "v1",
"created": "Thu, 1 May 1997 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Opitz",
"D. W.",
""
],
[
"Shavlik",
"J. W.",
""
]
] |
cs/9706101 | null | M. E. Pollack, D. Joslin, M. Paolucci | Flaw Selection Strategies for Partial-Order Planning | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 6, (1997),
223-262 | null | null | cs.AI | null | Several recent studies have compared the relative efficiency of alternative
flaw selection strategies for partial-order causal link (POCL) planning. We
review this literature, and present new experimental results that generalize
the earlier work and explain some of the discrepancies in it. In particular, we
describe the Least-Cost Flaw Repair (LCFR) strategy developed and analyzed by
Joslin and Pollack (1994), and compare it with other strategies, including
Gerevini and Schubert's (1996) ZLIFO strategy. LCFR and ZLIFO make very
different, and apparently conflicting claims about the most effective way to
reduce search-space size in POCL planning. We resolve this conflict, arguing
that much of the benefit that Gerevini and Schubert ascribe to the LIFO
component of their ZLIFO strategy is better attributed to other causes. We show
that for many problems, a strategy that combines least-cost flaw selection with
the delay of separable threats will be effective in reducing search-space size,
and will do so without excessive computational overhead. Although such a
strategy thus provides a good default, we also show that certain domain
characteristics may reduce its effectiveness.
| [
{
"version": "v1",
"created": "Sun, 1 Jun 1997 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Pollack",
"M. E.",
""
],
[
"Joslin",
"D.",
""
],
[
"Paolucci",
"M.",
""
]
] |
cs/9706102 | null | P. Jonsson, T. Drakengren | A Complete Classification of Tractability in RCC-5 | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 6, (1997),
211-221 | null | null | cs.AI | null | We investigate the computational properties of the spatial algebra RCC-5,
which is a restricted version of the RCC framework for spatial reasoning. The
satisfiability problem for RCC-5 is known to be NP-complete but not much is
known about its approximately four billion subclasses. We provide a complete
classification of satisfiability for all these subclasses as either polynomial
or NP-complete. In the process, we identify all maximal tractable
subalgebras, of which there are four in total.
| [
{
"version": "v1",
"created": "Sun, 1 Jun 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Jonsson",
"P.",
""
],
[
"Drakengren",
"T.",
""
]
] |
cs/9707101 | null | D. L. Mammen, T. Hogg | A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search
Difficulty | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997), 47-66 | null | null | cs.AI | null | The easy-hard-easy pattern in the difficulty of combinatorial search problems
as constraints are added has been explained as due to a competition between the
decrease in number of solutions and increased pruning. We test the generality
of this explanation by examining one of its predictions: if the number of
solutions is held fixed by the choice of problems, then increased pruning
should lead to a monotonic decrease in search cost. Instead, we find the
easy-hard-easy pattern in median search cost even when the number of solutions
is held constant, for some search methods. This generalizes previous
observations of this pattern and shows that the existing theory does not
explain the full range of the peak in search cost. In these cases the pattern
appears to be due to changes in the size of the minimal unsolvable subproblems,
rather than changing numbers of solutions.
| [
{
"version": "v1",
"created": "Tue, 1 Jul 1997 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Mammen",
"D. L.",
""
],
[
"Hogg",
"T.",
""
]
] |
cs/9707102 | null | T. Drakengren, P. Jonsson | Eight Maximal Tractable Subclasses of Allen's Algebra with Metric Time | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 7, (1997), 25-45 | null | null | cs.AI | null | This paper combines two important directions of research in temporal
reasoning: that of finding maximal tractable subclasses of Allen's interval
algebra, and that of reasoning with metric temporal information. Eight new
maximal tractable subclasses of Allen's interval algebra are presented, some of
them subsuming previously reported tractable algebras. The algebras allow for
metric temporal constraints on interval starting or ending points, using the
recent framework of Horn DLRs. Two of the algebras can express the notion of
sequentiality between intervals, being the first such algebras admitting both
qualitative and metric time.
| [
{
"version": "v1",
"created": "Tue, 1 Jul 1997 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Drakengren",
"T.",
""
],
[
"Jonsson",
"P.",
""
]
] |
cs/9707103 | null | J. Y. Halpern | Defining Relative Likelihood in Partially-Ordered Preferential
Structures | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997), 1-24 | null | null | cs.AI | null | Starting with a likelihood or preference order on worlds, we extend it to a
likelihood ordering on sets of worlds in a natural way, and examine the
resulting logic. Lewis earlier considered such a notion of relative likelihood
in the context of studying counterfactuals, but he assumed a total preference
order on worlds. Complications arise when examining partial orders that are not
present for total orders. There are subtleties involving the exact approach to
lifting the order on worlds to an order on sets of worlds. In addition, the
axiomatization of the logic of relative likelihood in the case of partial
orders gives insight into the connection between relative likelihood and
default reasoning.
| [
{
"version": "v1",
"created": "Tue, 1 Jul 1997 00:00:00 GMT"
}
] | 1,472,601,600,000 | [
[
"Halpern",
"J. Y.",
""
]
] |
cs/9709101 | null | M. Tambe | Towards Flexible Teamwork | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 7, (1997), 83-124 | null | null | cs.AI | null | Many AI researchers are today striving to build agent teams for complex,
dynamic multi-agent domains, with intended applications in arenas such as
education, training, entertainment, information integration, and collective
robotics. Unfortunately, uncertainties in these complex, dynamic domains
obstruct coherent teamwork. In particular, team members often encounter
differing, incomplete, and possibly inconsistent views of their environment.
Furthermore, team members can unexpectedly fail in fulfilling responsibilities
or discover unexpected opportunities. Highly flexible coordination and
communication are key in addressing such uncertainties. Simply fitting
individual agents with precomputed coordination plans will not do, for their
inflexibility can cause severe failures in teamwork, and their
domain-specificity hinders reusability. Our central hypothesis is that the key
to such flexibility and reusability is providing agents with general models of
teamwork. Agents exploit such models to autonomously reason about coordination
and communication, providing requisite flexibility. Furthermore, the models
enable reuse across domains, both saving implementation effort and enforcing
consistency. This article presents one general, implemented model of teamwork,
called STEAM. The basic building block of teamwork in STEAM is joint intentions
(Cohen & Levesque, 1991b); teamwork in STEAM is based on agents' building up a
(partial) hierarchy of joint intentions (this hierarchy is seen to parallel
Grosz & Kraus's partial SharedPlans, 1996). Furthermore, in STEAM, team members
monitor the team's and individual members' performance, reorganizing the team
as necessary. Finally, decision-theoretic communication selectivity in STEAM
ensures reduction in communication overheads of teamwork, with appropriate
sensitivity to the environmental conditions. This article describes STEAM's
application in three different complex domains, and presents detailed empirical
results.
| [
{
"version": "v1",
"created": "Mon, 1 Sep 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Tambe",
"M.",
""
]
] |
cs/9709102 | null | C. G. Nevill-Manning, I. H. Witten | Identifying Hierarchical Structure in Sequences: A linear-time algorithm | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 7, (1997), 67-82 | null | null | cs.AI | null | SEQUITUR is an algorithm that infers a hierarchical structure from a sequence
of discrete symbols by replacing repeated phrases with a grammatical rule that
generates the phrase, and continuing this process recursively. The result is a
hierarchical representation of the original sequence, which offers insights
into its lexical structure. The algorithm is driven by two constraints that
reduce the size of the grammar, and produce structure as a by-product. SEQUITUR
breaks new ground by operating incrementally. Moreover, the method's simple
structure permits a proof that it operates in space and time that is linear in
the size of the input. Our implementation can process 50,000 symbols per second
and has been applied to an extensive range of real world sequences.
| [
{
"version": "v1",
"created": "Mon, 1 Sep 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Nevill-Manning",
"C. G.",
""
],
[
"Witten",
"I. H.",
""
]
] |
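A simplified sketch of the grammar-building idea: repeatedly replace a repeated pair of adjacent symbols with a fresh rule. This offline loop (closer in spirit to Re-Pair) omits SEQUITUR's incremental processing and rule-utility constraint, which the algorithm needs for its linear-time guarantee:

```python
# Simplified, offline variant of SEQUITUR's digram replacement: while some
# pair of adjacent symbols repeats, introduce a rule for it and rewrite.
from collections import Counter

def build_grammar(seq):
    rules, next_id, seq = {}, 0, list(seq)
    while len(seq) > 1:
        digrams = Counter(zip(seq, seq[1:]))
        pair, count = max(digrams.items(), key=lambda kv: kv[1])
        if count < 2:
            break                           # no digram repeats: done
        name = f"R{next_id}"
        next_id += 1
        rules[name] = list(pair)
        out, i = [], 0
        while i < len(seq):                 # rewrite all occurrences
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

top, rules = build_grammar("abcabdabcabd")
print(top)     # ['R3', 'R3']
print(rules)   # R0 -> a b, R1 -> R0 c, R2 -> R1 R0, R3 -> R2 d
```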
cs/9711102 | null | L. H. Ihrig, S. Kambhampati | Storing and Indexing Plan Derivations through Explanation-based Analysis
of Retrieval Failures | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997),
161-198 | null | null | cs.AI | null | Case-Based Planning (CBP) provides a way of scaling up domain-independent
planning to solve large problems in complex domains. It replaces the detailed
and lengthy search for a solution with the retrieval and adaptation of previous
planning experiences. In general, CBP has been demonstrated to improve
performance over generative (from-scratch) planning. However, the performance
improvements it provides are dependent on adequate judgements as to problem
similarity. In particular, although CBP may substantially reduce planning
effort overall, it is subject to a mis-retrieval problem. The success of CBP
depends on these retrieval errors being relatively rare. This paper describes
the design and implementation of a replay framework for the case-based planner
DERSNLP+EBL. DERSNLP+EBL extends current CBP methodology by incorporating
explanation-based learning techniques that allow it to explain and learn from
the retrieval failures it encounters. These techniques are used to refine
judgements about case similarity in response to feedback when a wrong decision
has been made. The same failure analysis is used in building the case library,
through the addition of repairing cases. Large problems are split and stored as
single goal subproblems. Multi-goal problems are stored only when these smaller
cases fail to be merged into a full solution. An empirical evaluation of this
approach demonstrates the advantage of learning from experienced retrieval
failure.
| [
{
"version": "v1",
"created": "Sat, 1 Nov 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Ihrig",
"L. H.",
""
],
[
"Kambhampati",
"S.",
""
]
] |
cs/9711103 | null | N. L. Zhang, W. Liu | A Model Approximation Scheme for Planning in Partially Observable
Stochastic Domains | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997),
199-230 | null | null | cs.AI | null | Partially observable Markov decision processes (POMDPs) are a natural model
for planning problems where effects of actions are nondeterministic and the
state of the world is not completely observable. It is difficult to solve
POMDPs exactly. This paper proposes a new approximation scheme. The basic idea
is to transform a POMDP into another one where additional information is
provided by an oracle. The oracle informs the planning agent that the current
state of the world is in a certain region. The transformed POMDP is
consequently said to be region observable. It is easier to solve than the
original POMDP. We propose to solve the transformed POMDP and use its optimal
policy to construct an approximate policy for the original POMDP. By
controlling the amount of additional information that the oracle provides, it
is possible to find a proper tradeoff between computational time and
approximation quality. In terms of algorithmic contributions, we study in
detail how to exploit region observability in solving the transformed POMDP.
To facilitate the study, we also propose a new exact algorithm for general
POMDPs. The algorithm is conceptually simple and yet is significantly more
efficient than all previous exact algorithms.
| [
{
"version": "v1",
"created": "Sat, 1 Nov 1997 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Zhang",
"N. L.",
""
],
[
"Liu",
"W.",
""
]
] |
cs/9711104 | null | D. Monderer, M. Tennenholtz | Dynamic Non-Bayesian Decision Making | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997),
231-248 | null | null | cs.AI | null | The model of a non-Bayesian agent who faces a repeated game with incomplete
information against Nature is an appropriate tool for modeling general
agent-environment interactions. In such a model the environment state
(controlled by Nature) may change arbitrarily, and the feedback/reward function
is initially unknown. The agent is not Bayesian; that is, he forms a prior
probability neither on the state selection strategy of Nature nor on his
reward function. A policy for the agent is a function which assigns an action
to every history of observations and actions. Two basic feedback structures are
considered. In one of them -- the perfect monitoring case -- the agent is able
to observe the previous environment state as part of his feedback, while in the
other -- the imperfect monitoring case -- all that is available to the agent is
the reward obtained. Both of these settings refer to partially observable
processes, where the current environment state is unknown. Our main result
refers to the competitive ratio criterion in the perfect monitoring case. We
prove the existence of an efficient stochastic policy that ensures that the
competitive ratio is obtained at almost all stages with an arbitrarily high
probability, where efficiency is measured in terms of rate of convergence. It
is further shown that such an optimal policy does not exist in the imperfect
monitoring case. Moreover, it is proved that in the perfect monitoring case
there does not exist a deterministic policy that satisfies our long run
optimality criterion. In addition, we discuss the maxmin criterion and prove
that a deterministic efficient optimal strategy does exist in the imperfect
monitoring case under this criterion. Finally we show that our approach to
long-run optimality can be viewed as qualitative, which distinguishes it from
previous work in this area.
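To make the criterion concrete, here is a hedged sketch illustrating the
competitive ratio criterion only (not the stochastic policy whose existence is
proved): it measures at what fraction of stages a reward sequence achieves
ratio r against the best rewards in hindsight.

```python
def ratio_achieved_stages(rewards, best, r):
    """Fraction of stages t at which the agent's cumulative reward is at
    least r times the best cumulative reward achievable in hindsight; the
    paper's criterion asks this to hold at almost all stages.  `rewards`
    and `best` are per-stage reward sequences (an assumption of this toy)."""
    got = tot = 0.0
    hits = []
    for x, y in zip(rewards, best):
        got += x
        tot += y
        hits.append(got >= r * tot)
    return sum(hits) / len(hits)

print(ratio_achieved_stages([1, 0, 0, 1], [1, 1, 1, 1], r=0.5))  # 0.75
```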
| [
{
"version": "v1",
"created": "Sat, 1 Nov 1997 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Monderer",
"D.",
""
],
[
"Tennenholtz",
"M.",
""
]
] |
cs/9712101 | null | J. Frank, P. Cheeseman, J. Stutz | When Gravity Fails: Local Search Topology | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997),
249-281 | null | null | cs.AI | null | Local search algorithms for combinatorial search problems frequently
encounter a sequence of states in which it is impossible to improve the value
of the objective function; moves through these regions, called plateau moves,
dominate the time spent in local search. We analyze and characterize plateaus
for three different classes of randomly generated Boolean Satisfiability
problems. We identify several interesting features of plateaus that impact the
performance of local search algorithms. We show that local minima tend to be
small but occasionally may be very large. We also show that local minima can be
escaped without unsatisfying a large number of clauses, but that systematically
searching for an escape route may be computationally expensive if the local
minimum is large. We show that plateaus with exits, called benches, tend to be
much larger than minima, and that some benches have very few exit states which
local search can use to escape. We show that the solutions (i.e., global
minima) of randomly generated problem instances form clusters, which behave
similarly to local minima. We revisit several enhancements of local search
algorithms and explain their performance in light of our results. Finally we
discuss strategies for creating the next generation of local search algorithms.
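A hedged sketch of the kind of instrumentation involved (our simplification,
not the authors' code): flood-fill a plateau through equal-cost single-flip
neighbors, counting states that offer a downhill exit. Zero exits means a
local minimum; otherwise the plateau is a bench.

```python
from collections import deque

def cost(a, clauses):
    # Number of unsatisfied clauses; literal v > 0 means x_v, v < 0 means
    # the negation of x_v (variables are 1-indexed).
    return sum(all(a[abs(l) - 1] != (l > 0) for l in c) for c in clauses)

def explore_plateau(start, clauses):
    """Enumerate all assignments reachable from `start` by single flips
    that keep the objective constant, counting states with a downhill
    move.  Returns (plateau_size, number_of_exit_states)."""
    level = cost(start, clauses)
    seen, exits = {start}, 0
    queue = deque([start])
    while queue:
        a = queue.popleft()
        downhill = False
        for i in range(len(a)):
            nb = a[:i] + (not a[i],) + a[i + 1:]
            c = cost(nb, clauses)
            if c < level:
                downhill = True                  # an exit exists here
            elif c == level and nb not in seen:
                seen.add(nb)
                queue.append(nb)
        exits += downhill
    return len(seen), exits

# (x1 or x2) and (not x1 or not x2): (False, False) is a one-state bench.
print(explore_plateau((False, False), [[1, 2], [-1, -2]]))   # (1, 1)
```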
| [
{
"version": "v1",
"created": "Mon, 1 Dec 1997 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Frank",
"J.",
""
],
[
"Cheeseman",
"P.",
""
],
[
"Stutz",
"J.",
""
]
] |
cs/9712102 | null | H. Kaindl, G. Kainz | Bidirectional Heuristic Search Reconsidered | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 7, (1997),
283-317 | null | null | cs.AI | null | The assessment of bidirectional heuristic search has been incorrect since it
was first published more than a quarter of a century ago. For quite a long
time, this search strategy did not achieve the expected results, and there was
a major misunderstanding about the reasons behind it. Although there is still
widespread belief that bidirectional heuristic search is afflicted by the
problem of search frontiers passing each other, we demonstrate that this
conjecture is wrong. Based on this finding, we present both a new generic
approach to bidirectional heuristic search and a new approach to dynamically
improving heuristic values that is feasible in bidirectional search only. These
approaches are put into perspective with both the traditional and more recently
proposed approaches in order to facilitate a better overall understanding.
Empirical results of experiments with our new approaches show that
bidirectional heuristic search can be performed very efficiently and also with
limited memory. These results suggest that bidirectional heuristic search
appears to be better for solving certain difficult problems than corresponding
unidirectional search. This provides some evidence for the usefulness of a
search strategy that was long neglected. In summary, we show that bidirectional
heuristic search is viable and consequently propose that it be reconsidered.
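For orientation, here is the heuristic-free skeleton on which bidirectional
heuristic search builds (a standard bidirectional uniform-cost search, not the
paper's new approach; the graph encoding is an assumption of this sketch):

```python
import heapq

def bidir_ucs(graph, s, t):
    """Bidirectional uniform-cost search.  graph[u] = [(v, w), ...] and is
    assumed undirected.  Returns the length of a shortest s-t path: once a
    node is settled from both sides, the best meeting cost mu is optimal."""
    if s == t:
        return 0
    INF = float('inf')
    dist = ({s: 0}, {t: 0})            # forward / backward distances
    pq = ([(0, s)], [(0, t)])          # one priority queue per direction
    done = (set(), set())
    mu = INF                           # best complete path seen so far
    while pq[0] and pq[1]:
        d = 0 if pq[0][0][0] <= pq[1][0][0] else 1   # expand smaller front
        du, u = heapq.heappop(pq[d])
        if u in done[d]:
            continue                   # stale queue entry
        done[d].add(u)
        if u in done[1 - d]:
            return mu                  # fronts met: mu is provably optimal
        for v, w in graph[u]:
            nd = du + w
            if nd < dist[d].get(v, INF):
                dist[d][v] = nd
                heapq.heappush(pq[d], (nd, v))
            if v in dist[1 - d]:       # fronts touch through edge (u, v)
                mu = min(mu, du + w + dist[1 - d][v])
    return mu

g = {'a': [('b', 1), ('c', 4)], 'b': [('a', 1), ('c', 1)],
     'c': [('a', 4), ('b', 1)]}
print(bidir_ucs(g, 'a', 'c'))          # 2, via a-b-c
```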
| [
{
"version": "v1",
"created": "Mon, 1 Dec 1997 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Kaindl",
"H.",
""
],
[
"Kainz",
"G.",
""
]
] |
cs/9801101 | null | G. Gogic, C. H. Papadimitriou, M. Sideri | Incremental Recompilation of Knowledge | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998), 23-37 | null | null | cs.AI | null | Approximating a general formula from above and below by Horn formulas (its
Horn envelope and Horn core, respectively) was proposed by Selman and Kautz
(1991, 1996) as a form of ``knowledge compilation,'' supporting rapid
approximate reasoning; on the negative side, this scheme is static in that it
supports no updates, and has certain complexity drawbacks pointed out by
Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many
frameworks and schemes proposed in the literature for theory update and
revision are plagued by serious complexity-theoretic impediments, even in the
Horn case, as was pointed out by Eiter and Gottlob (1992), and is further
demonstrated in the present paper. More fundamentally, these schemes are not
inductive, in that they may lose in a single update any positive properties of
the represented sets of formulas (small size, Horn structure, etc.). In this
paper we propose a new scheme, incremental recompilation, which combines Horn
approximation and model-based updates; this scheme is inductive and very
efficient, free of the problems facing its constituents. A set of formulas is
represented by an upper and lower Horn approximation. To update, we replace the
upper Horn formula by the Horn envelope of its minimum-change update, and
similarly the lower one by the Horn core of its update; the key fact which
enables this scheme is that Horn envelopes and cores are easy to compute when
the underlying formula is the result of a minimum-change update of a Horn
formula by a clause. We conjecture that efficient algorithms are possible for
more complex updates.
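The model-theoretic fact behind the scheme can be made concrete: a
propositional function is Horn exactly when its model set is closed under
intersection, so the models of the Horn envelope are the intersection-closure
of the models. A minimal sketch (representation as frozensets of true atoms is
our choice):

```python
def horn_envelope_models(models):
    """Close a set of propositional models under componentwise conjunction
    (intersection of true-atom sets).  The closure is exactly the model
    set of the Horn envelope, the least Horn upper bound."""
    closed = set(models)
    frontier = set(models)
    while frontier:
        new = set()
        for m in frontier:
            for n in closed:
                i = m & n
                if i not in closed and i not in new:
                    new.add(i)
        closed |= new
        frontier = new
    return closed

# Models of (x or y) -- not Horn, since {x} & {y} = {} is not a model.
ms = {frozenset({'x'}), frozenset({'y'}), frozenset({'x', 'y'})}
print(horn_envelope_models(ms))   # the closure adds the empty model
```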
| [
{
"version": "v1",
"created": "Thu, 1 Jan 1998 00:00:00 GMT"
}
] | 1,179,878,400,000 | [
[
"Gogic",
"G.",
""
],
[
"Papadimitriou",
"C. H.",
""
],
[
"Sideri",
"M.",
""
]
] |
cs/9801102 | null | J. Engelfriet | Monotonicity and Persistence in Preferential Logics | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998), 1-21 | null | null | cs.AI | null | An important characteristic of many logics for Artificial Intelligence is
their nonmonotonicity. This means that adding a formula to the premises can
invalidate some of the consequences. There may, however, exist formulae that
can always be safely added to the premises without destroying any of the
consequences: we say they respect monotonicity. Also, there may be formulae
that, when they are a consequence, cannot be invalidated when adding any
formula to the premises: we call them conservative. We study these two classes
of formulae for preferential logics, and show that they are closely linked to
the formulae whose truth-value is preserved along the (preferential) ordering.
We will consider some preferential logics for illustration, and prove syntactic
characterization results for them. The results in this paper may improve the
efficiency of theorem provers for preferential logics.
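A hedged, finite-model sketch of the preservation property at issue (the
helper names and the abnormality example are assumptions, not the paper's
syntactic characterizations):

```python
def respects_order(models, preferred, holds):
    """Check that a formula's truth is preserved along the preferential
    ordering: whenever it holds in a model, it also holds in every more
    preferred model.  `preferred(m2, m1)` and `holds(m)` are assumed."""
    return all(holds(m2)
               for m1 in models if holds(m1)
               for m2 in models if preferred(m2, m1))

# Models as sets of true atoms; preference = absence of abnormality 'ab'.
models = [{'bird'}, {'bird', 'ab'}, {'bird', 'ab', 'flies'}]
preferred = lambda m2, m1: 'ab' not in m2 and 'ab' in m1
print(respects_order(models, preferred, lambda m: 'bird' in m))  # True
print(respects_order(models, preferred, lambda m: 'ab' in m))    # False
```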
| [
{
"version": "v1",
"created": "Thu, 1 Jan 1998 00:00:00 GMT"
}
] | 1,179,878,400,000 | [
[
"Engelfriet",
"J.",
""
]
] |
cs/9803101 | null | B. Srivastava, S. Kambhampati | Synthesizing Customized Planners from Specifications | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998), 93-128 | null | null | cs.AI | null | Existing plan synthesis approaches in artificial intelligence fall into two
categories -- domain independent and domain dependent. The domain independent
approaches are applicable across a variety of domains, but may not be very
efficient in any one given domain. The domain dependent approaches need to be
(re)designed for each domain separately, but can be very efficient in the
domain for which they are designed. One enticing alternative to these
approaches is to automatically synthesize domain independent planners given the
knowledge about the domain and the theory of planning. In this paper, we
investigate the feasibility of using existing automated software synthesis
tools to support such synthesis. Specifically, we describe an architecture
called CLAY in which the Kestrel Interactive Development System (KIDS) is used
to semi-automatically combine a declarative theory of planning with the
declarative control knowledge specific to a given domain, thereby deriving a
domain-customized planner. We discuss what it means to write a declarative
theory of planning and control knowledge for KIDS, and illustrate our approach
by generating a class of domain-specific planners using state space
refinements. Our experiments show that the synthesized planners can outperform
classical refinement planners (implemented as instantiations of UCP,
Kambhampati & Srivastava, 1995), using the same control knowledge. We will
contrast the costs and benefits of the synthesis approach with conventional
methods for customizing domain independent planners.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 1998 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Srivastava",
"B.",
""
],
[
"Kambhampati",
"S.",
""
]
] |
cs/9803102 | null | A. Moore, M. S. Lee | Cached Sufficient Statistics for Efficient Machine Learning with Large
Datasets | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998), 67-91 | null | null | cs.AI | null | This paper introduces new algorithms and data structures for quick counting
for machine learning datasets. We focus on the counting task of constructing
contingency tables, but our approach is also applicable to counting the number
of records in a dataset that match conjunctive queries. Subject to certain
assumptions, the costs of these operations can be shown to be independent of
the number of records in the dataset and loglinear in the number of non-zero
entries in the contingency table. We provide a very sparse data structure, the
ADtree, to minimize memory use. We provide analytical worst-case bounds for
this structure for several models of data distribution. We empirically
demonstrate that tractably-sized data structures can be produced for large
real-world datasets by (a) using a sparse tree structure that never allocates
memory for counts of zero, (b) never allocating memory for counts that can be
deduced from other counts, and (c) not bothering to expand the tree fully near
its leaves. We show how the ADtree can be used to accelerate Bayes net
structure finding algorithms, rule learning algorithms, and feature selection
algorithms, and we provide a number of empirical results comparing ADtree
methods against traditional direct counting approaches. We also discuss the
possible uses of ADtrees in other machine learning methods, and discuss the
merits of ADtrees in comparison with alternative representations such as
kd-trees, R-trees and Frequent Sets.
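A much-simplified sketch of the counting contract (memoized conjunctive-query
counts standing in for the ADtree, which additionally avoids storing zero
counts, counts deducible from others, and fully expanded leaves; the toy
dataset is hypothetical):

```python
from functools import lru_cache
from itertools import product

# Toy categorical dataset: each record is a tuple of attribute values.
DATA = [('sunny', 'hot', 'no'), ('sunny', 'mild', 'yes'),
        ('rain',  'mild', 'yes'), ('rain',  'hot', 'no')]

@lru_cache(maxsize=None)
def count(query):
    """Number of records matching a conjunctive query, given as a sorted
    tuple of (attribute_index, value) pairs.  Caching plays the role the
    ADtree plays: repeated queries never rescan the data."""
    return sum(all(rec[i] == v for i, v in query) for rec in DATA)

def contingency_table(attrs):
    """Contingency table over the given attribute indices, assembled
    entirely from cached counts."""
    values = [sorted({rec[i] for rec in DATA}) for i in attrs]
    return {combo: count(tuple(sorted(zip(attrs, combo))))
            for combo in product(*values)}

print(contingency_table((0, 2)))   # joint counts of attributes 0 and 2
```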
| [
{
"version": "v1",
"created": "Sun, 1 Mar 1998 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Moore",
"A.",
""
],
[
"Lee",
"M. S.",
""
]
] |
cs/9803103 | null | S. Argamon-Engelson, M. Koppel | Tractability of Theory Patching | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998), 39-65 | null | null | cs.AI | null | In this paper we consider the problem of `theory patching', in which we are
given a domain theory, some of whose components are indicated to be possibly
flawed, and a set of labeled training examples for the domain concept. The
theory patching problem is to revise only the indicated components of the
theory, such that the resulting theory correctly classifies all the training
examples. Theory patching is thus a type of theory revision in which revisions
are made to individual components of the theory. Our concern in this paper is
to determine for which classes of logical domain theories the theory patching
problem is tractable. We consider both propositional and first-order domain
theories, and show that the theory patching problem is equivalent to that of
determining what information contained in a theory is `stable' regardless of
what revisions might be performed to the theory. We show that determining
stability is tractable if the input theory satisfies two conditions: that
revisions to each theory component have monotonic effects on the classification
of examples, and that theory components act independently in the classification
of examples in the theory. We also show how the concepts introduced can be used
to determine the soundness and completeness of particular theory patching
algorithms.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 1998 00:00:00 GMT"
}
] | 1,179,878,400,000 | [
[
"Argamon-Engelson",
"S.",
""
],
[
"Koppel",
"M.",
""
]
] |
cs/9805101 | null | J. F\"urnkranz | Integrative Windowing | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998),
129-164 | 10.1613/jair.487 | null | cs.AI | null | In this paper we re-investigate windowing for rule learning algorithms. We
show that, contrary to previous results for decision tree learning, windowing
can in fact achieve significant run-time gains in noise-free domains and
explain the different behavior of rule learning algorithms by the fact that
they learn each rule independently. The main contribution of this paper is
integrative windowing, a new type of algorithm that further exploits this
property by integrating good rules into the final theory right after they have
been discovered. Thus it avoids re-learning these rules in subsequent
iterations of the windowing process. Experimental evidence in a variety of
noise-free domains shows that integrative windowing can in fact achieve
substantial run-time gains. Furthermore, we discuss the problem of noise in
windowing and present an algorithm that is able to achieve run-time gains in a
set of experiments in a simple domain with artificial noise.
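A hedged sketch of the integrative windowing loop (the rule learner, coverage
test, and label attributes are assumed, user-supplied pieces, not the paper's
system):

```python
import random

def integrative_windowing(examples, learn_rules, covers,
                          init=50, grow=25, max_rounds=100):
    """Sketch only: `learn_rules(window)` returns candidate rules and
    `covers(rule, ex)` tests coverage; rules and examples carry a .label
    attribute.  Assumes the learner eventually produces consistent rules."""
    remaining = list(examples)
    window = random.sample(remaining, min(init, len(remaining)))
    theory = []
    for _ in range(max_rounds):
        for rule in learn_rules(window):
            covered = [e for e in remaining if covers(rule, e)]
            if covered and all(e.label == rule.label for e in covered):
                # The integrative step: a rule consistent with the FULL
                # training set goes straight into the theory, and the
                # examples it covers are never re-learned.
                theory.append(rule)
                remaining = [e for e in remaining if not covers(rule, e)]
                window = [e for e in window if not covers(rule, e)]
        if not remaining:
            break
        # Classic windowing step: grow the window with examples the
        # current theory leaves uncovered.
        fresh = [e for e in remaining
                 if not any(covers(r, e) for r in theory)]
        if not fresh:
            break
        window += random.sample(fresh, min(grow, len(fresh)))
    return theory
```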
| [
{
"version": "v1",
"created": "Fri, 1 May 1998 00:00:00 GMT"
}
] | 1,544,400,000,000 | [
[
"Fürnkranz",
"J.",
""
]
] |
cs/9806101 | null | A. Darwiche | Model-Based Diagnosis using Structured System Descriptions | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998),
165-222 | null | null | cs.AI | null | This paper presents a comprehensive approach for model-based diagnosis which
includes proposals for characterizing and computing preferred diagnoses,
assuming that the system description is augmented with a system structure (a
directed graph explicating the interconnections between system components).
Specifically, we first introduce the notion of a consequence, which is a
syntactically unconstrained propositional sentence that characterizes all
consistency-based diagnoses, and show that standard characterizations of
diagnoses, such as minimal conflicts, correspond to syntactic variations on a
consequence. Second, we propose a new syntactic variation on the consequence
known as negation normal form (NNF) and discuss its merits compared to standard
variations. Third, we introduce a basic algorithm for computing consequences in
NNF given a structured system description. We show that if the system structure
does not contain cycles, then there is always a linear-size consequence in NNF
which can be computed in linear time. For arbitrary system structures, we show
a precise connection between the complexity of computing consequences and the
topology of the underlying system structure. Finally, we present an algorithm
that enumerates the preferred diagnoses characterized by a consequence. The
algorithm is shown to take linear time in the size of the consequence if the
preference criterion satisfies some general conditions.
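As a pointer to the setting (an illustration of consistency-based diagnosis
generally, not of the consequence-based algorithms in the paper), minimal
diagnoses can be enumerated by increasing cardinality against a consistency
oracle; the two-inverter model below is a toy assumption:

```python
from itertools import combinations

def minimal_diagnoses(components, consistent):
    """Enumerate minimal consistency-based diagnoses by increasing size.
    `consistent(broken)` -- an assumed stand-in for a theorem-prover call
    -- tests whether declaring the components in `broken` abnormal is
    consistent with the system description and observations."""
    minimal = []
    for k in range(len(components) + 1):
        for broken in combinations(components, k):
            if any(set(m) <= set(broken) for m in minimal):
                continue                    # a subset already explains it
            if consistent(broken):
                minimal.append(broken)
    return minimal

def consistent(broken):
    # Toy system: inverters A and B in series, input 0, observed output 1
    # (the healthy prediction would be 0).  Consistent iff some value of
    # the intermediate wire satisfies all healthy components.
    for mid in (0, 1):
        if 'A' not in broken and mid != 1 - 0:
            continue                        # healthy A forces mid = 1
        if 'B' not in broken and 1 != 1 - mid:
            continue                        # healthy B forces out = 1 - mid
        return True
    return False

print(minimal_diagnoses(['A', 'B'], consistent))   # [('A',), ('B',)]
```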
| [
{
"version": "v1",
"created": "Mon, 1 Jun 1998 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Darwiche",
"A.",
""
]
] |
cs/9806102 | null | L. Finkelstein, S. Markovitch | A Selective Macro-learning Algorithm and its Application to the NxN
Sliding-Tile Puzzle | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 8, (1998),
223-263 | null | null | cs.AI | null | One of the most common mechanisms used for speeding up problem solvers is
macro-learning. Macros are sequences of basic operators acquired during problem
solving. Macros are used by the problem solver as if they were basic operators.
The major problem that macro-learning presents is the vast number of macros
that are available for acquisition. Macros increase the branching factor of the
search space and can severely degrade problem-solving efficiency. To make macro
learning useful, a program must be selective in acquiring and utilizing macros.
This paper describes a general method for selective acquisition of macros.
Solvable training problems are generated in increasing order of difficulty. The
only macros acquired are those that take the problem solver out of a local
minimum to a better state. The utility of the method is demonstrated in several
domains, including the domain of NxN sliding-tile puzzles. After learning on
small puzzles, the system is able to efficiently solve puzzles of any size.
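A hedged sketch of the selection criterion (the domain functions `ops` and
`apply_op` and the non-negative, integer-valued heuristic `h` are assumptions):
the only sequences recorded as macros are those that carry the solver out of a
local minimum of h to a strictly better state.

```python
from itertools import product

def acquire_macros(training_problems, ops, apply_op, h, max_len=3):
    """Run hill-climbing on training problems of increasing difficulty;
    whenever the solver is stuck in a local minimum of h, search for a
    short escaping operator sequence and record it as a macro."""
    macros = set()
    for state in training_problems:
        while h(state) > 0:                            # 0 means solved
            succ = [(h(apply_op(state, o)), o) for o in ops]
            best_h, best_o = min(succ, key=lambda t: t[0])
            if best_h < h(state):                      # ordinary climbing
                state = apply_op(state, best_o)
                continue
            escape = None                              # local minimum
            for length in range(2, max_len + 1):
                for seq in product(ops, repeat=length):
                    s = state
                    for o in seq:
                        s = apply_op(s, o)
                    if h(s) < h(state):
                        escape = (seq, s)
                        break
                if escape:
                    break
            if escape is None:
                break                                  # give up on problem
            macros.add(escape[0])                      # record the macro
            state = escape[1]
    return macros
```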
| [
{
"version": "v1",
"created": "Mon, 1 Jun 1998 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Finkelstein",
"L.",
""
],
[
"Markovitch",
"S.",
""
]
] |
cs/9808101 | null | M. L. Littman, J. Goldsmith, M. Mundhenk | The Computational Complexity of Probabilistic Planning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 9, (1998), 1-36 | null | null | cs.AI | null | We examine the computational complexity of testing and finding small plans in
probabilistic planning domains with both flat and propositional
representations. The complexity of plan evaluation and existence varies with
the plan type sought; we examine totally ordered plans, acyclic plans, looping
plans, and partially ordered plans under three natural definitions of
plan value. We show that problems of interest are complete for a variety of
complexity classes: PL, P, NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. In the
process of proving that certain planning problems are complete for NP^PP, we
introduce a new basic NP^PP-complete problem, E-MAJSAT, which generalizes the
standard Boolean satisfiability problem to computations involving probabilistic
quantities; our results suggest that the development of good heuristics for
E-MAJSAT could be important for the creation of efficient algorithms for a wide
variety of problems.
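A brute-force reference implementation of E-MAJSAT (exponential, useful only
for checking the definition; the encoding is our assumption): choose the
existential variables so as to maximize the probability that a uniformly
random assignment to the chance variables satisfies the CNF.

```python
from itertools import product

def e_majsat(clauses, e_vars, c_vars):
    """A literal v > 0 means variable v is true, v < 0 means it is false.
    Returns (max probability, maximizing assignment to e_vars)."""
    def satisfied(assign):
        return all(any((l > 0) == assign[abs(l)] for l in c)
                   for c in clauses)
    best = (-1.0, None)
    for evals in product([False, True], repeat=len(e_vars)):
        fixed = dict(zip(e_vars, evals))
        sat = 0
        for cvals in product([False, True], repeat=len(c_vars)):
            sat += satisfied({**fixed, **dict(zip(c_vars, cvals))})
        p = sat / 2 ** len(c_vars)
        if p > best[0]:
            best = (p, fixed)
    return best

# (x1 or x2) and (not x1 or x3): choice variable x1, chance x2 and x3.
print(e_majsat([[1, 2], [-1, 3]], e_vars=[1], c_vars=[2, 3]))
# -> (0.5, {1: False}); either choice of x1 satisfies with probability 0.5
```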
| [
{
"version": "v1",
"created": "Sat, 1 Aug 1998 00:00:00 GMT"
}
] | 1,179,878,400,000 | [
[
"Littman",
"M. L.",
""
],
[
"Goldsmith",
"J.",
""
],
[
"Mundhenk",
"M.",
""
]
] |
cs/9810016 | Ion Muslea | Ion Muslea | SYNERGY: A Linear Planner Based on Genetic Programming | 13 pages, European Conference on Planning 1997 | "Recent Advances in AI Planning" (Sam Steel & Rachid Alami eds.),
p. 312-325, Springer 1997 (LNAI 1348) | null | null | cs.AI | null | In this paper we describe SYNERGY, which is a highly parallelizable, linear
planning system that is based on the genetic programming paradigm. Rather than
reasoning about the world it is planning for, SYNERGY uses artificial
selection, recombination, and a fitness measure to generate linear plans that
solve conjunctive goals. We ran SYNERGY on several domains (e.g., the briefcase
problem and a few variants of the robot navigation problem), and the
experimental results show that our planner is capable of handling problem
instances that are one to two orders of magnitude larger than the ones solved
by UCPOP. To reduce the search effort and to enhance the
expressive power of SYNERGY, we also propose two major extensions to our
planning system: a formalism for using hierarchical planning operators, and a
framework for planning in dynamic environments.
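A hedged sketch of the evolutionary loop (our simplification, using truncation
selection and one-point crossover; `ops`, `apply_op`, and goal predicates are
assumed domain-supplied, and SYNERGY's actual genetic operators may differ):

```python
import random

def gp_plan(init_state, goals, ops, apply_op, pop=60, gens=200, plen=12):
    """Evolve fixed-length operator sequences (linear plans), scoring each
    by how many conjunctive goals hold after simulating it."""
    def fitness(plan):
        s = init_state
        for op in plan:
            s = apply_op(s, op)
        return sum(g(s) for g in goals)    # goals are predicates on states

    popn = [[random.choice(ops) for _ in range(plen)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        if fitness(scored[0]) == len(goals):
            return scored[0]               # all conjunctive goals achieved
        parents = scored[:pop // 2]        # truncation selection
        popn = parents[:]
        while len(popn) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, plen)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                # point mutation
                child[random.randrange(plen)] = random.choice(ops)
            popn.append(child)
    return None                            # no full solution found
```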
| [
{
"version": "v1",
"created": "Fri, 16 Oct 1998 22:11:35 GMT"
}
] | 1,179,878,400,000 | [
[
"Muslea",
"Ion",
""
]
] |
cs/9811024 | Krzysztof R. Apt | Krzysztof R. Apt | The Essence of Constraint Propagation | To appear in Theoretical Computer Science in the special issue
devoted to the 24th ICALP conference (Bologna 1997) | null | null | null | cs.AI | null | We show that several constraint propagation algorithms (also called (local)
consistency, consistency enforcing, Waltz, filtering or narrowing algorithms)
are instances of algorithms that deal with chaotic iteration. To this end we
propose a simple abstract framework that allows us to classify and compare
these algorithms and to establish in a uniform way their basic properties.
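A minimal rendering of the scheme (the scheduling policy and encodings are our
assumptions): repeatedly apply inflationary, monotone domain-reduction
functions until a common fixpoint, here instantiated with an arc-consistency
function for the constraint x < y.

```python
def chaotic_iteration(domains, functions):
    """`functions` is a list of (vars, f) pairs where f maps the tuple of
    current domains of `vars` to possibly smaller domains.  Each f is
    assumed inflationary (only removes values) and monotone, which makes
    the common fixpoint independent of the scheduling order."""
    changed = True
    while changed:
        changed = False
        for vars_, f in functions:
            new = f(tuple(domains[v] for v in vars_))
            for v, d in zip(vars_, new):
                if d != domains[v]:
                    domains[v] = d
                    changed = True
    return domains

def lt(doms):
    # Arc consistency for x < y: keep values with a support on the other side.
    dx, dy = doms
    return ({a for a in dx if any(a < b for b in dy)},
            {b for b in dy if any(a < b for a in dx)})

print(chaotic_iteration({'x': {1, 2, 3}, 'y': {1, 2, 3}},
                        [(('x', 'y'), lt)]))
# {'x': {1, 2}, 'y': {2, 3}}
```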
| [
{
"version": "v1",
"created": "Fri, 13 Nov 1998 13:04:02 GMT"
}
] | 1,179,878,400,000 | [
[
"Apt",
"Krzysztof R.",
""
]
] |
cs/9812010 | Erik T. Mueller | Erik T. Mueller, Michael G. Dyer | Towards a computational theory of human daydreaming | 10 pages. Appears in: Proceedings of the Seventh Annual Conference of
the Cognitive Science Society (pp. 120-129). Irvine, CA. 1985 | null | null | null | cs.AI | null | This paper examines the phenomenon of daydreaming: spontaneously recalling or
imagining personal or vicarious experiences in the past or future. The
following important roles of daydreaming in human cognition are postulated:
plan preparation and rehearsal, learning from failures and successes, support
for processes of creativity, emotion regulation, and motivation.
A computational theory of daydreaming and its implementation as the program
DAYDREAMER are presented. DAYDREAMER consists of 1) a scenario generator based
on relaxed planning, 2) a dynamic episodic memory of experiences used by the
scenario generator, 3) a collection of personal goals and control goals which
guide the scenario generator, 4) an emotion component in which daydreams
initiate, and are initiated by, emotional states arising from goal outcomes,
and 5) domain knowledge of interpersonal relations and common everyday
occurrences.
The role of emotions and control goals in daydreaming is discussed. Four
control goals commonly used in guiding daydreaming are presented:
rationalization, failure/success reversal, revenge, and preparation. The role
of episodic memory in daydreaming is considered, including how daydreamed
information is incorporated into memory and later used. An initial version of
DAYDREAMER which produces several daydreams (in English) is currently running.
| [
{
"version": "v1",
"created": "Thu, 10 Dec 1998 16:29:07 GMT"
}
] | 1,179,878,400,000 | [
[
"Mueller",
"Erik T.",
""
],
[
"Dyer",
"Michael G.",
""
]
] |
cs/9812017 | Wolfgang Slany | Andreas Raggl, Wolfgang Slany | A reusable iterative optimization software library to solve
combinatorial problems with approximate reasoning | 33 pages, 9 figures; for a project overview see
http://www.dbai.tuwien.ac.at/proj/StarFLIP/ | International Journal of Approximate Reasoning, 19(1--2):161--191,
July/August 1998 | null | DBAI-TR-98-23 | cs.AI | null | Real world combinatorial optimization problems such as scheduling are
typically too complex to solve with exact methods. Additionally, the problems
often have to observe vaguely specified constraints of different importance,
the available data may be uncertain, and compromises between antagonistic
criteria may be necessary. We present a combination of approximate reasoning
based constraints and iterative optimization based heuristics that help to
model and solve such problems in a framework of C++ software libraries called
StarFLIP++. While initially developed to schedule continuous caster units in
steel plants, we present in this paper results from reusing the library
components in a shift scheduling system for the workforce of an industrial
production plant.
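A hedged sketch of the combination (not the StarFLIP++ API; all helper names
are assumptions): each constraint returns a satisfaction degree in [0, 1] with
an importance weight, and an iterative improver maximizes the weighted mean.

```python
import random

def fuzzy_hill_climb(init, neighbors, constraints, steps=1000, seed=0):
    """`constraints` is a list of (importance_weight, degree_fn) pairs,
    where degree_fn maps a candidate solution to a satisfaction degree in
    [0, 1]; hill climbing (restarts omitted) maximizes the weighted mean."""
    rng = random.Random(seed)
    total = sum(w for w, _ in constraints)
    score = lambda s: sum(w * f(s) for w, f in constraints) / total
    cur = init
    for _ in range(steps):
        cand = rng.choice(neighbors(cur))
        if score(cand) >= score(cur):      # accept sideways moves too
            cur = cand
    return cur, score(cur)
```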
| [
{
"version": "v1",
"created": "Tue, 15 Dec 1998 21:45:15 GMT"
}
] | 1,179,878,400,000 | [
[
"Raggl",
"Andreas",
""
],
[
"Slany",
"Wolfgang",
""
]
] |
cs/9903016 | Journal of Artificial Intelligence Research | N Friedman, J.Y. Halpern | Modeling Belief in Dynamic Systems, Part II: Revision and Update | See http://www.jair.org/ for other files accompanying this article | Journal of Artificial Intelligence Research, Vol.10 (1999) 117-167 | null | null | cs.AI | null | The study of belief change has been an active area in philosophy and AI. In
recent years two special cases of belief change, belief revision and belief
update, have been studied in detail. In a companion paper (Friedman & Halpern,
1997), we introduce a new framework to model belief change. This framework
combines temporal and epistemic modalities with a notion of plausibility,
allowing us to examine the change of beliefs over time. In this paper, we show
how belief revision and belief update can be captured in our framework. This
allows us to compare the assumptions made by each method, and to better
understand the principles underlying them. In particular, it shows that Katsuno
and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on
several strong assumptions that may limit its applicability in artificial
intelligence. Finally, our analysis allows us to identify a notion of minimal
change that underlies a broad range of belief change operations including
revision and update.
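The contrast between the two operations can be seen in a few lines (a standard
Hamming-distance illustration in the style of Dalal revision and PMA update,
not the plausibility framework of the paper):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def revise(kb_models, new_models):
    """Revision flavor: keep the new-information models globally closest
    to some KB model (minimal change measured once, over the whole KB)."""
    d = min(hamming(m, k) for m in new_models for k in kb_models)
    return {m for m in new_models
            if min(hamming(m, k) for k in kb_models) == d}

def update(kb_models, new_models):
    """Update flavor: move each KB model separately to its closest
    new-information models, then take the union (pointwise change)."""
    out = set()
    for k in kb_models:
        d = min(hamming(m, k) for m in new_models)
        out |= {m for m in new_models if hamming(m, k) == d}
    return out

# Two-variable worlds as (p, q) bit pairs.  KB: p xor q.  New info: p.
kb, new = {(0, 1), (1, 0)}, {(1, 0), (1, 1)}
print(revise(kb, new))   # {(1, 0)}          -- single globally closest world
print(update(kb, new))   # {(1, 0), (1, 1)}  -- each KB world moved minimally
```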
| [
{
"version": "v1",
"created": "Wed, 24 Mar 1999 00:22:01 GMT"
}
] | 1,179,878,400,000 | [
[
"Friedman",
"N",
""
],
[
"Halpern",
"J. Y.",
""
]
] |
cs/9906002 | Stevan Harnad | Stevan Harnad | The Symbol Grounding Problem | null | Physica D 42: 335-346 | 10.1016/0167-2789(90)90087-6 | null | cs.AI | null | How can the semantic interpretation of a formal symbol system be made
intrinsic to the system, rather than just parasitic on the meanings in our
heads? How can the meanings of the meaningless symbol tokens, manipulated
solely on the basis of their (arbitrary) shapes, be grounded in anything but
other meaningless symbols? The problem is analogous to trying to learn Chinese
from a Chinese/Chinese dictionary alone. A candidate solution is sketched:
Symbolic representations must be grounded bottom-up in nonsymbolic
representations of two kinds: (1) "iconic representations," which are analogs
of the proximal sensory projections of distal objects and events, and (2)
"categorical representations," which are learned and innate feature-detectors
that pick out the invariant features of object and event categories from their
sensory projections. Elementary symbols are the names of these object and event
categories, assigned on the basis of their (nonsymbolic) categorical
representations. Higher-order (3) "symbolic representations," grounded in these
elementary symbols, consist of symbol strings describing category membership
relations (e.g., "An X is a Y that is Z").
| [
{
"version": "v1",
"created": "Tue, 1 Jun 1999 19:57:24 GMT"
}
] | 1,435,190,400,000 | [
[
"Harnad",
"Stevan",
""
]
] |
cs/9909003 | Rabindra Narayan Behera | S. Mohanty (1) and R.N. Behera (2) ((1) Department of Computer Science
and Application Utkal University, Bhubaneswar, India, (2) National
Informatics Centre, Puri, India) | Iterative Deepening Branch and Bound | 39 html pages + 4 gif files (fig1,fig1(a),fig2,fig3) | null | null | null | cs.AI | null | In tree search problem the best-first search algorithm needs too much of
space . To remove such drawbacks of these algorithms the IDA* was developed
which is both space and time cost efficient. But again IDA* can give an optimal
solution for real valued problems like Flow shop scheduling, Travelling
Salesman and 0/1 Knapsack due to their real valued cost estimates. Thus further
modifications are done on it and the Iterative Deepening Branch and Bound
Search Algorithms is developed which meets the requirements. We have tried
using this algorithm for the Flow Shop Scheduling Problem and have found that
it is quite effective.
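A hedged sketch of the idea (our reconstruction from the description above,
not the authors' code; `children`, `f`, and `is_goal` are assumed
problem-supplied, with f a monotone lower bound equal to the true cost at
goals): depth-first search under a cost bound, raising the bound each
iteration to the smallest f-value pruned, so the number of iterations stays
finite even with real-valued costs.

```python
import math

def idbb(root, children, f, is_goal):
    """Iterative-deepening branch and bound.  Because the bound rises to
    the minimum f-value that overflowed it, the first iteration to find a
    solution finds an optimal one (given an admissible, monotone f)."""
    bound = f(root)
    best = (math.inf, None)
    next_bound = math.inf

    def dfs(node, best):
        nonlocal next_bound
        v = f(node)
        if v >= best[0]:
            return best                      # branch-and-bound pruning
        if v > bound:
            next_bound = min(next_bound, v)  # remember smallest overflow
            return best
        if is_goal(node):
            return (v, node)
        for c in children(node):
            best = dfs(c, best)
        return best

    while best[0] == math.inf and bound < math.inf:
        next_bound = math.inf
        best = dfs(root, best)
        bound = next_bound
    return best
```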
| [
{
"version": "v1",
"created": "Fri, 3 Sep 1999 10:31:46 GMT"
}
] | 1,179,878,400,000 | [
[
"Mohanty",
"S.",
""
],
[
"Behera",
"R. N.",
""
]
] |
cs/9910016 | Juergen Dix | Juergen Dix, Mirco Nanni, VS Subrahmanian | Probabilistic Agent Programs | 44 pages, 1 figure, Appendix | null | null | null | cs.AI | null | Agents are small programs that autonomously take actions based on changes in
their environment or ``state.'' Over the last few years, there have been an
increasing number of efforts to build agents that can interact and/or
collaborate with other agents. In one of these efforts, Eiter, Subrahmanian and
Pick (AIJ, 108(1-2), pages 179-255) have shown how agents may be built on top
of legacy code. However, their framework assumes that agent states are
completely determined, and there is no uncertainty in an agent's state. Thus,
their framework allows an agent developer to specify how his agents will react
when the agent is 100% sure about what is true/false in the world state. In
this paper, we propose the concept of a \emph{probabilistic agent program} and
show how, given an arbitrary program written in any imperative language, we may
build a declarative ``probabilistic'' agent program on top of it which supports
decision making in the presence of uncertainty. We provide two alternative
semantics for probabilistic agent programs. We show that the second semantics,
though more epistemically appealing, is more complex to compute. We provide
sound and complete algorithms to compute the semantics of \emph{positive} agent
programs.
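A toy rendering of the flavor of such programs (a simplification of our own;
the paper's operators and semantics are richer): an action rule fires when its
precondition's probability over the uncertain state clears a threshold.

```python
def permitted_actions(state_dist, rules):
    """`state_dist` is a list of (probability, world) pairs describing the
    agent's uncertain state; each rule (action, precondition, threshold)
    permits its action when the precondition's probability is at least
    the threshold.  All structures here are illustrative assumptions."""
    actions = []
    for action, pre, thr in rules:
        p = sum(pr for pr, w in state_dist if pre(w))
        if p >= thr:
            actions.append(action)
    return actions

dist = [(0.7, {'door_open': True}), (0.3, {'door_open': False})]
rules = [('enter', lambda w: w['door_open'], 0.6),
         ('knock', lambda w: not w['door_open'], 0.25)]
print(permitted_actions(dist, rules))   # ['enter', 'knock']
```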
| [
{
"version": "v1",
"created": "Thu, 21 Oct 1999 09:35:38 GMT"
}
] | 1,179,878,400,000 | [
[
"Dix",
"Juergen",
""
],
[
"Nanni",
"Mirco",
""
],
[
"Subrahmanian",
"VS",
""
]
] |
cs/9911012 | Joseph Y. Halpern | Joseph Y. Halpern | Cox's Theorem Revisited | Changed running head from original submission | Journal of AI Research, vol. 11, 1999, pp. 429-435 | null | null | cs.AI | null | The assumptions needed to prove Cox's Theorem are discussed and examined.
Various sets of assumptions under which a Cox-style theorem can be proved are
provided, although all are rather strong and, arguably, not natural.
| [
{
"version": "v1",
"created": "Sat, 27 Nov 1999 17:57:17 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Dec 1999 22:29:49 GMT"
}
] | 1,179,878,400,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |