<title>Papers</title>

To view a paper, click on the open book image. <br> 
<br>

<h1>Uncertain Reasoning</h1>

<ol>

<! ===========================================================================>

<a name="banner-proposal-95.ps.Z"</a>

<b><li> Refinement of Bayesian Networks by Combining Connectionist and
Symbolic Techniques <br></b>
Sowmya Ramachandran<br>

Ph.D. proposal, Department of Computer Sciences, University of Texas
at Austin, 1995. <p>

<blockquote>
Bayesian networks provide a mathematically sound formalism for
representing and reasoning with uncertain knowledge and are, as such,
widely used. However, acquiring and capturing knowledge in this
framework is difficult. There is a growing interest in formulating
techniques for learning Bayesian networks inductively. While the
problem of learning a Bayesian network, given complete data, has been
explored in some depth, the problem of learning networks with
unobserved causes is still open. In this proposal, we view this
problem from the perspective of theory revision and present a novel
approach which adapts techniques developed for revising theories in
symbolic and connectionist representations.  Thus, we assume that the
learner is given an initial approximate network (usually obtained from
an expert). Our technique inductively revises the network to fit the
data better.  Our proposed system has two components: one component
revises the parameters of a Bayesian network of known structure, and
the other component revises the structure of the network. The
component for parameter revision maps the given Bayesian network into
a multi-layer feedforward neural network, with the parameters mapped
to weights in the neural network, and uses standard backpropagation
techniques to learn the weights. The structure revision component uses
qualitative analysis to suggest revisions to the network when it fails
to predict the data accurately. The first component has been
implemented and we will present results from experiments on real world
classification problems which show our technique to be effective.  We
will also discuss our proposed structure revision algorithm, our plans
for experiments to evaluate the system, and possible extensions to it.
</blockquote>

<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/banner-proposal-95.ps.Z">
<img align=top src="paper.xbm"></a><p>
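
A minimal sketch (ours, not the authors' system) of the parameter-revision
idea above, under the noisy-or assumption used in the implemented component:
each parent's link probability is reparameterized as the sigmoid of a
real-valued weight, the node's output probability is computed as a feedforward
pass, and the weights are fitted to data by gradient descent on cross-entropy,
which is the role backpropagation plays in the proposed system. The data,
initial parameters, and learning rate below are invented for illustration. <p>

<pre>
# Illustrative sketch: revise the parameters of a single noisy-or node by
# gradient descent, starting from a rough initial (expert-style) estimate.
import numpy as np

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

def noisy_or(p, x):
    # P(child = 1 | parent values x) = 1 - prod_i (1 - p_i)^{x_i}
    return 1.0 - np.prod((1.0 - p) ** x, axis=-1)

rng = np.random.default_rng(0)

# Synthetic data from a "true" noisy-or node with three binary parents.
true_p = np.array([0.9, 0.6, 0.1])
X = rng.integers(0, 2, size=(500, 3))
Y = (noisy_or(true_p, X) > rng.random(500)).astype(float)

# Rough initial parameters, mapped to unconstrained weights (inverse sigmoid).
p0 = np.array([0.5, 0.5, 0.5])
w = np.log(p0 / (1.0 - p0))

for _ in range(5000):
    p = sigmoid(w)
    q = np.prod((1.0 - p) ** X, axis=1)           # prob. the child stays off
    yhat = np.clip(1.0 - q, 1e-6, 1.0 - 1e-6)
    # Gradient of mean cross-entropy w.r.t. w, by the chain rule:
    # d yhat / d w_i = x_i * q * p_i
    dL_dyhat = -(Y / yhat - (1.0 - Y) / (1.0 - yhat)) / len(Y)
    grad = ((dL_dyhat * q)[:, None] * X * p).sum(axis=0)
    w -= 1.0 * grad

print("revised parameters:", np.round(sigmoid(w), 2))   # should approach true_p
</pre>
<p>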

<! ===========================================================================>

<a name="banner-icnn-96.ps.Z"</a>


<b><li>Revising Bayesian Network Parameters Using Backpropagation<br></b>
Sowmya Ramachandran and Raymond J. Mooney<br>

To appear in the Proceedings of the International Conference on Neural
Networks (ICNN-96), Washington, D.C., June 1996, Special Session on
Knowledge-Based Artificial Neural Networks. <p>

<blockquote>
Learning Bayesian networks with hidden variables is known to be a hard
problem. Even the simpler task of learning just the conditional
probabilities on a Bayesian network with hidden variables is hard. In this
paper, we present an approach that learns the conditional probabilities on
a Bayesian network with hidden variables by transforming it into a
multi-layer feedforward neural network (ANN). The conditional probabilities
are mapped onto weights in the ANN, which are then learned using standard
backpropagation techniques. To avoid the problem of exponentially large
ANNs, we focus on Bayesian networks with noisy-or and noisy-and
nodes. Experiments on real world classification problems demonstrate the
effectiveness of our technique.
</blockquote>

<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/banner-icnn-96.ps.Z">
<img align=top src="paper.xbm"></a><p>
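
A small worked note on the exponential-size point above (the framing is ours):
a full conditional probability table for a binary node with n binary parents
has 2^n free entries, one per parent configuration, while a noisy-or node
needs only n link probabilities, one per parent, which is what keeps the
mapped feedforward network small. <p>

<pre>
# Parameter counts: full CPT vs. noisy-or, for a binary child with n binary parents.
import numpy as np

def cpt_size(n_parents):
    return 2 ** n_parents      # one probability per joint parent configuration

def noisy_or_size(n_parents):
    return n_parents           # one link probability per parent

for n in (5, 10, 20):
    print(n, cpt_size(n), noisy_or_size(n))
# 20 parents: 1048576 CPT entries vs. 20 noisy-or parameters.

# Evaluating a noisy-or node for one parent configuration:
p = np.array([0.8, 0.3, 0.5])   # illustrative link probabilities
x = np.array([1, 0, 1])         # first and third parents active
print(1.0 - np.prod((1.0 - p) ** x))   # 1 - (0.2 * 0.5) = 0.9
</pre>
<p>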

<h1>Qualitative Modeling &amp; Diagnosis</h1>

<! ===========================================================================>

<a name="misq-rt-qr-94.ps.Z"</a>

<b> <li>Learning Qualitative Models for Systems with Multiple Operating Regions<br></b>

Sowmya Ramachandran, Raymond J. Mooney and Benjamin J. Kuipers <br>

<cite>Proceedings of the Eighth International Workshop on Qualitative
Reasoning about Physical Systems</cite>, pp. 212-223, Nara, Japan,
June 1994. (QR-94) <p>

<blockquote>
The problem of learning qualitative models of physical systems from
observations of their behaviour has been addressed by several
researchers in recent years. Most current techniques limit themselves
to learning a single qualitative differential equation to model the
entire system.  However, many systems have several qualitative
differential equations underlying them.  In this paper, we present an
approach to learning the models for such systems.  Our technique
divides the behaviours into segments, each of which can be explained
by a single qualitative differential equation.  The qualitative model
for each segment can be generated using any of the existing techniques
for learning a single model.  We show results of applying our
technique to several examples and demonstrate that it is effective.
</blockquote>

<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/misq-rt-qr-94.ps.Z"</a><p>
<img align=top src="paper.xbm"></a><p>
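
A toy illustration of the segmentation step described above (ours, not the
algorithm from the paper): a simulated behaviour that crosses an
operating-region boundary is split wherever the qualitative direction of
change of the observed variable flips, and each resulting segment could then
be handed to an existing single-model learner. The simulated trace and
threshold are invented. <p>

<pre>
# Toy sketch: split a behaviour trace into segments at points where the
# qualitative direction of change (increasing / steady / decreasing) flips.
import numpy as np

# Simulated behaviour: a quantity that rises in one operating region,
# then decays in another (e.g. a tank filling, then draining).
t = np.linspace(0.0, 10.0, 200)
x = np.where(t > 5.0, 5.0 * np.exp(-(t - 5.0)), t)

def qualitative_direction(values, eps=1e-3):
    # Map numeric differences to qualitative directions +1, 0, -1.
    d = np.diff(values)
    return np.sign(np.where(np.abs(d) > eps, d, 0.0)).astype(int)

def segment(values):
    dirs = qualitative_direction(values)
    boundaries = [0]
    for i in range(1, len(dirs)):
        if dirs[i] != dirs[i - 1]:
            boundaries.append(i)       # direction flipped: new operating region
    boundaries.append(len(values))
    return list(zip(boundaries, boundaries[1:]))

print(segment(x))   # two segments: one rising, one falling
</pre>
<p>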

<! ===========================================================================>

<h1>Neural Networks</h1>

<b> <li>Information Measure Based Skeletonisation<br></b>

Sowmya Ramachandran and Lorien Y. Pratt<br>

<cite>Advances in Neural Information Processing Systems
Vol. 4</cite>, pp. 1080-1087, Denver, Colorado, 1992. <p>

<blockquote>
Automatic determination of proper neural network topology by trimming
over-sized networks is an important area of study, which has
previously been addressed using a variety of techniques. In this paper,
we present Information Measure Based Skeletonisation (IMBS), a new approach to
this problem where superfluous hidden units are removed based on their
information measure (IM). This measure, borrowed from decision tree
induction techniques, reflects the degree to which the hyperplane
formed by a hidden unit discriminates between training data
classes. We show the results of applying IMBS to three classification
tasks and demonstrate that it removes a substantial number of hidden
units without significantly affecting network performance.
</blockquote>
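
A rough sketch of the kind of measure involved (the paper's exact IM
definition may differ, and the activations and threshold below are invented):
each hidden unit's activations are binarized by its hyperplane, the
information gain of that split with respect to the class labels is computed
as in decision tree induction, and the lowest-gain units become candidates
for removal. <p>

<pre>
# Sketch: rank hidden units by the information gain of the class split induced
# by their hyperplane (activation above / below 0.5); low-gain units are
# candidates for removal, as in decision tree induction.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(split, labels):
    # Entropy of the labels minus the weighted entropy after the binary split.
    gain = entropy(labels)
    for side in (split, ~split):
        if side.any():
            gain -= side.mean() * entropy(labels[side])
    return gain

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=300)      # binary class labels

# Invented hidden-unit activations: unit 0 tracks the class closely,
# unit 1 only weakly, unit 2 is pure noise.
activations = np.column_stack([
    0.8 * labels + 0.1 * rng.random(300),
    0.3 * labels + 0.5 * rng.random(300),
    rng.random(300),
])

gains = [information_gain(activations[:, j] >= 0.5, labels)
         for j in range(activations.shape[1])]
print("information gain per hidden unit:", np.round(gains, 3))
print("prune candidates (lowest gain first):", np.argsort(gains))
</pre>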

</ol>

<hr>






