
<HTML>
<HEAD>
<TITLE>Leslie G. Valiant</TITLE>
</HEAD>
<BODY>
<h1>Leslie Valiant</h1>
<img align=top src="http://www.das.harvard.edu/users/faculty/Leslie_Valiant/Leslie_Valiant.gif" alt="Leslie G. Valiant">
<h3>Gordon McKay Professor of Computer Science and Applied Mathematics</h3>
<h3>THEORY OF COMPUTATION AND MACHINE LEARNING</h3>
<p>
Complexity theory, the study of the fundamental laws and limitations
that govern computations, is not only an inherently rich mathematical
subject but one whose conclusions and methodology are increasingly
relevant to practical computational problems.  Professor Valiant's
research is focused on two application areas, machine learning and
parallel computation, as well as the core area.
<p>
A machine that has a learning capability can augment its knowledge by
interacting with its environment; no programmer need understand the
machine's present, possibly unmanageably complicated state of
knowledge.  The goals of Professor Valiant's current research are to
derive models that capture the phenomenon of learning, particularly the
learning of concepts from examples, and to use these models to identify
efficient learning algorithms, as well as the ultimate limits to
computational learning.
<p>
In the area of parallel computation, the central question is how
efficient and easy to program general-purpose machines can be made.
In this connection, Professor Valiant and his colleagues have obtained
encouraging results on the possibility of constructing general-purpose
multiprocessor computers capable of executing parallel algorithms in
close to optimal time.  They are also searching for efficient or
optimal parallel algorithms for important computational problems.
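<p>
One way to make the learning of concepts from examples concrete is the
elimination algorithm for monotone conjunctions from Valiant's 1984 paper:
begin with the conjunction of all variables and delete any variable that is
false in some positive example.  The target concept and example generator
below are hypothetical illustrations, not code from the research itself.

```python
import random

def learn_conjunction(n, positive_examples):
    """Elimination algorithm: start with all n variables and drop any
    variable that is 0 in some positive example.  The surviving set is
    a conjunction consistent with every positive example seen."""
    hypothesis = set(range(n))
    for x in positive_examples:
        hypothesis = {i for i in hypothesis if x[i] == 1}
    return hypothesis

# Hypothetical target concept: x0 AND x2 over n = 5 Boolean variables.
n, target = 5, {0, 2}
random.seed(0)

def draw_positive():
    # Random example, forced to satisfy the target conjunction.
    x = [random.randint(0, 1) for _ in range(n)]
    for i in target:
        x[i] = 1
    return x

h = learn_conjunction(n, [draw_positive() for _ in range(50)])
print(h)  # with enough examples, h converges to the target conjunction
```

The hypothesis always contains the target's variables, and each spurious
variable survives a random positive example with probability one half, so
the error shrinks rapidly with the number of examples; this is the
sample-efficiency phenomenon that the PAC model quantifies.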
<p>
In the core area of complexity theory, relationships of surprising
generality can sometimes be obtained.  For example, relationships
have been found between the difficulty of finding solutions to
combinatorial problems when a unique solution is guaranteed and the
more general case when it is not; other work relates
the problem of randomly generating one solution, from possibly
exponentially many, to the problem of counting these solutions.
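<p>
The relation between counting and random generation can be sketched via
self-reducibility: given the ability to count the solutions extending any
partial assignment, one can draw a uniformly random solution by fixing
variables one at a time with probability proportional to the counts.  The
toy formula below is a hypothetical instance, with brute-force enumeration
standing in for the counting oracle of the reduction.

```python
import itertools
import random

# Hypothetical toy CNF formula over variables 0..3: each clause is a
# list of (variable, polarity) literals.
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(2, False), (3, True)]]
n = 4

def satisfies(assignment):
    return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

def count_extensions(prefix):
    """Count satisfying assignments extending the fixed prefix.
    Brute force stands in for the counting oracle of the reduction."""
    free = n - len(prefix)
    return sum(satisfies(prefix + list(tail))
               for tail in itertools.product([False, True], repeat=free))

def sample_uniform():
    """Self-reducibility: fix variables one at a time, choosing each
    value with probability proportional to its number of completions,
    which yields a uniformly random satisfying assignment."""
    prefix = []
    for _ in range(n):
        c0 = count_extensions(prefix + [False])
        c1 = count_extensions(prefix + [True])
        prefix.append(random.random() < c1 / (c0 + c1))
    return prefix

random.seed(1)
print(sample_uniform())
```

Run in reverse, the same self-reducibility links approximate counting to
random generation, which is the direction developed in the Jerrum-Valiant-
Vazirani paper listed below.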
<hr>
<ul>
<li><i>A theory of the learnable,</i> Commun. Assoc. Comp. Mach. 27, 
1136 (1984).

<li>M. R. Jerrum, L. G. Valiant, and V. V. Vazirani, <i>Random generation 
of combinatorial structures from a uniform distribution,</i>
Theor. Comp. Sci. 43, 169-188 (1986).

<li><i>Functionality in neural networks,</i> in Proc. 7th Natl. AAAI
Conf. on Artificial Intelligence, (Morgan Kaufmann, San Mateo,
Calif., 1988), pp. 629-634.

<li>M. Kearns and L. G. Valiant, <i>Cryptographic limitations on learning Boolean formulae and
finite automata,</i> Proc. 21st ACM Symp. on Theory of Computing,
(ACM Press, New York, 1989), pp. 433-444. 

<li><i>A bridging model for parallel computation,</i>
Commun. Assoc. Comp. Mach. 33, 103-111 (1990).
</ul>
</BODY>
</HTML>
