<HTML>
<HEAD>
<TITLE> Interesting AI Demos and Projects </TITLE>
</HEAD>

<BODY BGCOLOR="#ffffff" VLINK="#0060f0" LINK="#FF3300">
<H1> <IMG SRC="http://www.cs.wisc.edu/~dyer/images/bug40.gif">
     Interesting AI Demos and Projects </H1>
<P>
<IMG SRC="http://www.cs.wisc.edu/~dyer/images/construction.gif">
  Send suggestions for additions to this list to
  <TT><A HREF="mailto:dyer@cs.wisc.edu">dyer@cs.wisc.edu</A></TT>
<P>
<HR>

<H2> Agents </H2>
 <UL>
  <LI> <A HREF="http://www-white.media.mit.edu/vismod/demos/ive/ive.html">
	Interactive Video Environment </A> (MIT) <br>
	The Interactive Video Environment (IVE) is an experimental
	testbed for exploring how computer vision and software agent
	technologies can address problems of human-computer interface
	and interaction over networks. The goal of the IVE project is to
	develop smart cooperative work/play systems that function
	robustly despite wide variation in network and environmental
	conditions.

  <P>
  <LI> <A HREF="http://www.ffly.com/">Firefly Personal Music Recommendation Agent</A>

  <P>
  <LI> <A HREF="http://www.cs.washington.edu/research/projects/softbots/www/softbots.html">
	Internet Softbots </A> (U. Washington) <br>
	Building autonomous agents that interact with
	real-world software environments such as operating systems or databases is a
	pragmatically convenient yet intellectually challenging AI problem.
	We are using planning and
	machine-learning techniques to develop an Internet softbot (software robot),
	a customizable and (moderately) intelligent assistant for Internet access. The
	softbot accepts goals in a high-level language, generates and executes plans
	to achieve these goals, and learns from its experience (a toy sketch of
	this goal-plan-execute-learn cycle appears below the list).

  <P>
  <LI> <A HREF="http://lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/Letizia-Intro.html">Letizia: A Web Browsing Agent</A> (MIT)
  <P>
  <LI> <A HREF="http://agents.www.media.mit.edu:80/cgi-bin/find">Miscellaneous Resources on Agents</A>
 </UL>
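<P>
As a rough illustration (not the U. Washington code), the softbot's
goal-plan-execute-learn cycle might look like the Python sketch below.
The goal names, plan library, and success bookkeeping are all invented
for the example.
<PRE>
# Toy sketch of a goal-driven softbot loop: accept a goal, pick a
# plan, execute it, and remember which plans tend to succeed.
# Every goal and action name here is invented for illustration.
import random

PLAN_LIBRARY = {
    # goal -> candidate plans (each plan is a list of action names)
    "find-email-address": [["search-directory", "verify-address"],
                           ["finger-host", "verify-address"]],
    "fetch-tech-report":  [["locate-ftp-site", "download-file"]],
}

success_count = {}   # plan (as a tuple) -> number of successful runs

def execute(action):
    """Stand-in for a real effector; succeeds 80% of the time."""
    return random.random() < 0.8

def achieve(goal):
    plans = PLAN_LIBRARY.get(goal, [])
    # "Learning": prefer plans that have succeeded most often so far.
    plans = sorted(plans, key=lambda p: -success_count.get(tuple(p), 0))
    for plan in plans:
        if all(execute(a) for a in plan):
            success_count[tuple(plan)] = success_count.get(tuple(plan), 0) + 1
            return True
    return False

if __name__ == "__main__":
    print("goal achieved:", achieve("find-email-address"))
</PRE>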

<H2> Computer Vision </H2>
 <UL>
  <LI>Image Databases: <A HREF="http://wwwqbic.almaden.ibm.com/">IBM</A>, <A HREF="http://www.virage.com/cgi-bin/random/">Virage</A>, <A HREF="http://www.kodak.com/">Kodak</A>, <A HREF="http://www.illustra.com/">Illustra</A>, <A HREF="http://www.imageinfo.com/">Image Info</A>
  <LI> <A HREF="http://www.cs.unc.edu/~mcmillan/telep.html">Telepresence</A> (North Carolina, Penn, CMU)
  <LI> <A HREF="http://www.ius.cs.cmu.edu/IUS/elm_usr0/3DStudio/www/RepVis/RepVis.html">Virtualized Reality</A> (CMU)
  <LI> <A HREF="http://www-white.media.mit.edu/vismod/demos/football/football.html">
	Computers Watching Football</A> (MIT)
  <LI> <A HREF="http://www.ius.cs.cmu.edu/demos/facedemo.html">
	Face Recognition</A> (CMU)
  <LI> <A HREF="http://tns-www.lcs.mit.edu/cgi-bin/vs/vvdemo">
	Image Processing with Live Video Sources </A> (MIT)
  <LI> <A HREF="http://www.inria.fr/robotvis/personnel/vthierry/acvis-demo/main.html">
	Active Vision </A> (INRIA, France) (slow link)
  <LI> <A HREF="http://www.vrl.com/Imaging/">
	Basic Image Processing Functions </A> (Visioneering Res. Lab)
  <LI> <A HREF="http://www.cs.cmu.edu/afs/cs/project/cil/ftp/html/v-demos.html">
	Miscellaneous Computer Vision Demos</A>
 </UL>

<H2> Expert Systems </H2>
 <UL>
  <LI> <A HREF="http://vvv.com/ai/demos/gradorig.html">Screening applications for Graduate School</A>
 </UL>

<H2> Game Playing </H2>
 <UL>
  <LI> <A HREF="http://www.chess.ibm.park.org/deep/blue/deepblue.html">
	Deep Blue Chess Player</A> (IBM)
  <LI> <A HREF="http://web.cs.ualberta.ca:80/~chinook/">
	Chinook Checkers Player</A> (Alberta)
  <LI> <A HREF="http://www.labri.u-bordeaux.fr/~loeb/dpp/table.of.contents.html">
	Diplomacy</A>
 </UL>

<H2> Machine Learning </H2>
 <UL>
  <LI> <A HREF="http://www.icenet.it/icenet/neurality/links/home_uk.html">
        The ICE Neural Nets Hot List</A> has intro material on artificial
        neural networks, as well as some Java demonstrations.
  <LI> <A HREF="http://www.cs.cmu.edu/afs/cs.cmu.edu/project/alv/member/www/projects/ALVINN.html">
	ALVINN - Autonomous Vehicle Navigation using Neural Nets</A> (CMU) <br>
	ALVINN uses neural networks to learn visual servoing: it
	watches a person drive for five minutes and can
	then take over driving (see the sketch after this list).
	ALVINN has been trained to drive on
	dirt paths, single-lane country roads, city streets,
	and multi-lane highways.   Click
	<A HREF="http://www.cs.cmu.edu/afs/cs.cmu.edu/project/alv/member/www/photos_and_videos/photos_and_videos.html"> here</A>
	for images of the vehicles and videos of ALVINN in action.
  <LI> <A HREF="http://vvv.com/ai/demos/whale.html">Whale Identification using a Decision Tree</A>
 </UL>
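<P>
ALVINN's learn-by-watching scheme is, at heart, supervised learning:
camera images paired with the human driver's steering commands become
training examples. The Python sketch below substitutes a single linear
unit for ALVINN's multi-layer network and trains it on made-up data;
the 30x32 input retina size is the figure commonly cited for ALVINN,
and everything else is invented for the example.
<PRE>
# Rough sketch of ALVINN-style behavioral cloning: learn a mapping
# from a (tiny, fake) camera image to a steering command by watching
# a human driver.  A single linear unit stands in for the real
# multi-layer network.
import random

N_PIXELS = 30 * 32          # 30x32 retina, flattened to one list

def predict(weights, image):
    # steering in [-1, 1]: -1 = hard left, +1 = hard right
    return sum(w * p for w, p in zip(weights, image))

def train(examples, epochs=20, lr=1e-4):
    """examples: list of (image, human_steering) pairs."""
    weights = [0.0] * N_PIXELS
    for _ in range(epochs):
        for image, target in examples:
            error = predict(weights, image) - target
            for i, p in enumerate(image):
                weights[i] -= lr * error * p   # gradient step on squared error
    return weights

if __name__ == "__main__":
    # Fake stand-in for "five minutes of watching a person drive".
    demo = [([random.uniform(0, 1) for _ in range(N_PIXELS)],
             random.uniform(-1, 1)) for _ in range(200)]
    w = train(demo)
    print("steering command:", predict(w, demo[0][0]))
</PRE>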

<H2> Natural Language Processing </H2>
 <UL>
  <LI> <A HREF="http://debra.dgbt.doc.ca/chat/chat.html">
	CHAT Natural Language System </A> (Communications Canada) <br>
	CHAT (Conversational Hypertext Access Technology) is a computer
	program developed by the Communications Research Centre that
	provides easy access to electronic information through a
	natural-language interface: users ask questions in English
	and receive answers.

  <P>
  <LI> <A HREF="http://fuzine.mt.cs.cmu.edu/mlm/julia.html">
	JULIA</A> (CMU)

  <P>
  <LI> <A HREF="http://www.labs.bt.com/innovate/informat/netsumm/index.htm">
	Netsumm - A text summarizer for Web pages</A> (British Telecom)

  <P>
  <LI> <A HREF="http://sakharov.ai.mit.edu/start">
	START Natural Language Question Answering </A> (MIT) <br>
	Ask the natural language system
        <A HREF="http://www.ai.mit.edu/projects/iiip/doc/cl-http/start.html">
	START</A> English questions about
	the M.I.T. Artificial Intelligence Laboratory.  See, for example, snapshots of
	<A HREF="http://www.ai.mit.edu/projects/iiip/doc/cl-http/snapshots.html#start-question">
	asking a question</A> and
	<A HREF="http://www.ai.mit.edu/projects/iiip/doc/cl-http/snapshots.html#start-answer">
	viewing the answer</A>.

 </UL>

<H2> Robotics </H2>
 <UL>
  <LI> <A HREF="http://maas-neotek.arc.nasa.gov/Dante/dante.html">
	Dante II Walking Robot</A> (CMU) <br>
	The CMU Field Robotics Center (FRC) developed Dante II, a tethered
	walking robot that explored the Mt. Spurr (Aleutian Range,
	Alaska) volcano in July 1994.  Robotic explorers such as Dante II
	open a new era in field techniques by enabling scientists to
	conduct research and exploration remotely.

  <P>
  <LI> <A HREF="http://tommy.jsc.nasa.gov:80/er/er6/mrl/videos/">
	Mobile Robot Navigation using Stereo Vision</A> (NASA JSC) <br>
	The NASA JSC Mobile Robot Lab recently demonstrated the
	integration of a stereo vision system with a mobile robot for the
	purpose of following people or other robots.
	During following, the stereo system constantly feeds updated
	coordinates of the tracked agent to the mobile robot system.
	The mobile robot takes these coordinates as goal positions and
	tries to reach them while avoiding obstacles with its sonar
	sensors (a toy version of this control loop is sketched below
	the list).

  <P>
  <LI> <A HREF="http://www.cs.columbia.edu/robotics/projects/visual-control/allen-realtime.html">
	Tracking and Grasping Moving Objects</A> (Columbia) <br>
	Coordination between an organism's sensing modalities
	and motor control system is a hallmark of intelligent behavior,
	and we are pursuing the goal of
	building an integrated sensing and actuation system that
	can operate in dynamic rather than
	static environments. The system we are building is a
	multi-sensor system that integrates work in
	real-time vision, robotic arm control, and stable
	grasping of objects. Our first attempts have
	resulted in a system that can track and stably grasp a
	moving model train in real time.

  <P>
  <LI> <A HREF="http://www.usc.edu/dept/raiders/">
	Robot Tele-operation</A> (USC) <br>
	The Mercury Project allows users to
	tele-operate a robot arm moving over a terrain filled with buried artifacts. A CCD
	camera and pneumatic nozzle mounted on the robot allow users to select
	viewpoints and to direct short bursts of compressed air into the terrain. Users
	can thus "excavate" regions of the sand by positioning the arm, delivering a
	burst of air, and viewing the newly cleared region.

  <P>
  <LI> <A HREF="http://piglet.cs.umass.edu:4321/robotics-mpegs.html">
	Miscellaneous Robot Demos</A>
 </UL>
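<P>
The NASA JSC following behavior above is essentially a closed control
loop: read the tracked agent's position from stereo, treat it as the
goal, and move toward it unless the sonar reports an obstacle. The
Python below is a toy version with stubbed-out sensors, not the JSC
software; every reading and parameter is invented.
<PRE>
# Toy follow-the-leader loop: stereo vision supplies the leader's
# position, sonar vetoes motion toward obstacles.  Sensor stubs
# return invented values.
import math
import random

def stereo_target():
    """Stub for the stereo tracker: (x, y) of the agent being followed."""
    return (random.uniform(-5, 5), random.uniform(1, 10))

def sonar_clear(heading):
    """Stub for the sonar ring: is the path along `heading` clear?"""
    return random.random() < 0.9

def follow(steps=50, standoff=1.0, speed=0.2):
    x, y = 0.0, 0.0                       # robot position
    for _ in range(steps):
        gx, gy = stereo_target()          # goal = tracked agent's position
        if math.hypot(gx - x, gy - y) < standoff:
            continue                      # close enough; don't tailgate
        theta = math.atan2(gy - y, gx - x)
        if sonar_clear(theta):            # advance only if sonar says clear
            x += speed * math.cos(theta)
            y += speed * math.sin(theta)
        else:                             # crude avoidance: sidestep
            x += speed * math.cos(theta + math.pi / 2)
            y += speed * math.sin(theta + math.pi / 2)
    return x, y

if __name__ == "__main__":
    print("final position:", follow())
</PRE>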

<H2> Speech </H2>
 <UL>
 <LI><A HREF="http://www.research.digital.com/CRL/personal/waters/DECface.html">DECface talking synthetic face</A> (DEC)
 <LI><A HREF="http://www.labs.bt.com/innovate/speech/laureate/index.htm">Laureate text-to-speech synthesis system</A> (British Telecom)
 </UL>

</BODY>
</HTML>
