<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML>
<HEAD>
<title>Bill Rounds' Research Statement</title>
</HEAD><BODY>

<H2>Mathematics of Language </H2>

This subject goes back a long way, to early logicians (like Boole)
who were convinced that a calculus of reasoning was just around the corner;
all that was required was to translate logical arguments in natural language
into symbolic expressions.
<P>
This problem turned out to be far more difficult than imagined. It required first
understanding how meanings are associated with utterances; and to do that,
the structure of utterances itself had to be understood.
<P>
The foundations of today's mathematics of language were laid by
(among others) Noam Chomsky in the late 1950s and early 1960s.
People recognized then that simple syntactic systems, notably
context-free grammars, could be used to specify the syntax of both
programming languages and natural languages.
<P>
Since those days, it has been discovered that the syntax of natural languages
and of programming languages is more complicated than those simple systems
suggest, and people have worked on various ways to integrate syntax and
semantics. The basic cognitive premise, however, is that humans routinely use
certain data structures to process language, and that (quite possibly) humans
are genetically predisposed to use these data structures for both language
processing and learning.
<P>
Mathematics enters the picture when one wants to study the structural properties of
such postulated data structures, and the algorithms that use them. For example,
context-free grammars make heavy use of tree data structures.
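<P>
As a small illustration (the grammar and sentence here are my own invented
example, not taken from any particular system), a context-free derivation can
be pictured as a tree of labelled nodes, with the sentence read off the leaves:
<PRE>
# A minimal sketch: a context-free derivation represented as a tree.
# Illustrative grammar:  S -> NP VP,  NP -> "birds",  VP -> "fly"

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Node:
    label: str                            # nonterminal symbol, e.g. "S"
    children: List[Union["Node", str]]    # subtrees or terminal words

tree = Node("S", [Node("NP", ["birds"]),
                  Node("VP", ["fly"])])

def yield_of(t):
    """Read the terminal string off the leaves of the tree."""
    if isinstance(t, str):
        return [t]
    return [w for child in t.children for w in yield_of(child)]

print(" ".join(yield_of(tree)))           # prints: birds fly
</PRE>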
<P>
Most recently, I have been involved with a new class of data structures
called <I>feature structures</I> or attribute-value structures. These
entities occur ubiquitously in computer science, where they are known as
records, and like trees they have many interesting properties.
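<P>
As a rough sketch (the attribute names here are invented for the example),
such a structure can be modelled as a nested dictionary, much like a record;
note how two features may share a single substructure:
<PRE>
# A minimal sketch of an attribute-value (feature) structure,
# modelled as nested Python dictionaries. Attribute names are invented.

agreement = {"number": "singular", "person": "third"}

sentence_info = {
    "cat": "S",
    "subject":   {"cat": "NP", "agr": agreement},
    "predicate": {"cat": "VP", "agr": agreement},  # shared substructure:
}                                                  # both features point at
                                                   # the same 'agreement'

# The two agreement features are literally the same object:
print(sentence_info["subject"]["agr"] is sentence_info["predicate"]["agr"])  # True
</PRE>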
<P>
One such property is that feature structures may represent <I>incomplete</I>
information about a sentence. This leads to a natural ordering of
these structures according to how much information they contain. I am
now looking at ways to "fill up" feature structures; a very interesting
approach is to fill them with <I>default</I> information, which may
later have to be retracted. This leads to combining methods from artificial
intelligence (non-monotonic logic and belief revision) with
the mathematics of partial orders. The latter theory
has been well developed for programming languages, where it is called
<A HREF="ftp://theory.doc.ic.ac.uk/papers/Jung/handbook.ps.gz">domain theory</A>.
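<P>
To make the ordering concrete, here is a small sketch of my own (an
illustration only, not the formal development in the papers below):
subsumption compares two structures by information content, unification fills
one structure with another, and a conflict signals that a default has to be
retracted:
<PRE>
# A minimal sketch of subsumption (the information ordering) and
# unification on feature structures modelled as nested dictionaries.

def subsumes(a, b):
    """True if structure a carries no information absent from b."""
    if not isinstance(a, dict):
        return a == b                      # atomic values must match
    return (isinstance(b, dict) and
            all(k in b and subsumes(v, b[k]) for k, v in a.items()))

def unify(a, b):
    """Least informative common extension of a and b, or None on conflict."""
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None
    out = dict(a)
    for k, v in b.items():
        if k in out:
            merged = unify(out[k], v)
            if merged is None:
                return None                # inconsistent information
            out[k] = merged
        else:
            out[k] = v
    return out

partial  = {"agr": {"person": "third"}}
default  = {"agr": {"number": "singular"}}   # defeasible guess
observed = {"agr": {"number": "plural"}}     # hard evidence

filled = unify(partial, default)             # fill up with the default
print(subsumes(partial, filled))             # True: filling adds information
print(unify(filled, observed))               # None: the default must be retracted
</PRE>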
More on feature systems and defaults can be found in the specific project descriptions:
<UL>
<LI><A HREF="http://ai.eecs.umich.edu/people/rounds/feature.html">Feature Logic</A>
<LI><A HREF="http://ai.eecs.umich.edu/people/rounds/nonmon.html">Default Domain Theory</A>
</UL>
</BODY>
</HTML>