
<!doctype html public "-//W3O//DTD W3 HTML 2.0//EN">

<HTML>
<HEAD>
<TITLE>COMPUTER ARCHITECTURE GROUP</TITLE>
</HEAD>
<BODY>
<center>
<!WA0><A HREF="http://www.cag.lcs.mit.edu/"><!WA1><IMG SRC="http://www.lcs.mit.edu/web_project/Brochure/cag/cagline.gif"></A>
</center>
<p>
<center>
<table border>
<tr>
	<td><!WA2><img src="http://www.lcs.mit.edu/web_project/Brochure/cag/ward.gif"></td>
	<td width=100 rowspan=2><br></td>
	<td><!WA3><img src="http://www.lcs.mit.edu/web_project/Brochure/cag/agarwal.gif"></td>
</tr>
<tr>
	<td><address><b>Stephen A. Ward</b>,<br>Professor of Computer<br>Science and Engineering</address></td>
	<td><address><b><!WA4><A HREF="http://cag-www.lcs.mit.edu/~agarwal">Anant Agarwal</A></b>,<br>Jamieson Career Development<br>Associate Professor of Computer Science</address></td>
</tr>
</table>
</center>
<p>

Among the Computer Architecture group's research is a
project called NuMesh, which effectively combines
"Tinkertoy" modularity with supercomputer performance.
NuMesh describes a packaging and interconnect technology
that supports high-bandwidth systolic communications on a
novel 3D four-neighbor nearest-neighbor lattice. NuMesh
modules simply plug together rather than being connected by
printed circuit traces or backplane buses.
<p>
<!WA5><A HREF="http://cag-www.lcs.mit.edu:80/numesh/">NuMesh</a> explores an engineering discipline in which the
physical location of components is accounted for explicitly,
rather than being abstracted out of the logical model. Simple
engineering models are provided via software tools instead
of hardware generality, much as RISC maintains its
programming model through compilation rather than through
processor complexity. Modularity of the communications
substrate provides regularity in the compilation target and
allows iteration of a single communication building block
to replace a variety of ad hoc communication paths.
<center>
<table border>
<tr>
	<td><!WA6><img src=http://www.lcs.mit.edu/web_project/Brochure/cag/pratt.gif></td>
	<td width=100 rowspan=2><br></td>
	<td><!WA7><img src=http://www.lcs.mit.edu/web_project/Brochure/cag/krantz.gif></td>
</tr>
<tr>
	<td><address><b>Gill Pratt</b>,<br>Assistant Professor of Computer<br>Science and Engineering</address></td>
	<td><address><b><!WA8><A HREF="http://cag-www.lcs.mit.edu/~krantz">David A. Kranz</A></b>,<br>Research Associate</address></td>
</tr>
</table>
</center>
<p>
The cost/performance advantages of this approach stem from
three factors. First, physical component placement is
optimized as part of the logical design. Second, the
underlying communication technology can be designed for
performance rather than interconnect flexibility. Third,
interconnection relies on mass-produced modules, not on
configuration-specific wiring.
<p>
NuMesh research embraces several technologies, including
architecture of the communications substrate; clocking and
communication technologies; compilers and programming
models; and representative applications. Our initial focus
has been the important class of algorithms whose static
communication patterns can be precompiled into a system of
independent but carefully choreographed finite-state
machine descriptions. We are also exploring the extension
of NuMesh to more general communication -- to support
dynamic routing, for example.
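<p>
As a rough illustration only, the idea of precompiled, choreographed communication can be sketched in a few lines of Python. The node layout, port names, and two-node schedule below are hypothetical, not the actual NuMesh design: each node runs a tiny finite-state machine that, on every clock tick, performs the one transfer its compiled schedule dictates, so routing decisions are made at compile time rather than in hardware.

```python
# Hedged sketch of statically scheduled nearest-neighbor routing.
# Port names ("N", "S", "E", "W", "cpu") and the two-node wiring are
# illustrative assumptions, not the real NuMesh interconnect.
from collections import deque

class Node:
    def __init__(self, name, schedule):
        self.name = name
        self.schedule = schedule          # (src_port, dst_port) per tick
        self.ports = {p: deque() for p in ("N", "S", "E", "W", "cpu")}
        self.tick = 0

    def step(self):
        # Execute the single transfer choreographed for this clock tick.
        src, dst = self.schedule[self.tick % len(self.schedule)]
        if self.ports[src]:
            self.ports[dst].append(self.ports[src].popleft())
        self.tick += 1

def clock(nodes, wires):
    for n in nodes:
        n.step()
    # After local transfers, words cross the inter-node wires.
    for src_node, src_port, dst_node, dst_port in wires:
        while src_node.ports[src_port]:
            dst_node.ports[dst_port].append(src_node.ports[src_port].popleft())

# Node A forwards CPU data east every tick; node B delivers its west
# port to its CPU.  No routing logic runs at runtime -- only schedules.
a = Node("A", [("cpu", "E")])
b = Node("B", [("W", "cpu")])
a.ports["cpu"].extend([1, 2, 3])
for _ in range(4):
    clock([a, b], [(a, "E", b, "W")])
print(list(b.ports["cpu"]))   # -> [1, 2, 3]: words arrive in order
```

The point of the sketch is that the "program" of each node is just its schedule list; compiling a communication pattern means emitting those lists.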
<p>
On another front, the <!WA9><A HREF="http://cag-www.lcs.mit.edu:80/alewife/">Alewife</a> project was created to design
a scalable, cache-coherent, shared-memory multiprocessor.
In this program, several thousand VLSI processors, each
associated with a portion of shared memory, are
interconnected via a multistage network. Unlike
conventional shared-memory machines, this multiprocessor
exploits locality of reference at the hardware and
software levels to maximize available memory bandwidth. A
new distributed directory ensures coherence of the
high-speed caches each processor uses to store both private
and shared data, thus further exploiting locality.
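<p>
A minimal sketch may help fix the idea of a distributed directory, with the caveat that the class below is a toy invalidation protocol for a single memory block, not the actual Alewife protocol: the directory entry records which processors' caches hold a copy, and a write invalidates every other recorded sharer before updating the block.

```python
# Hedged sketch of directory-based cache coherence for one memory block.
# Write-through and the single-block scope are simplifying assumptions.
class Directory:
    def __init__(self, n_procs):
        self.sharers = set()                    # caches holding a copy
        self.memory = 0                         # value at the home node
        self.caches = {p: None for p in range(n_procs)}

    def read(self, proc):
        if self.caches[proc] is None:           # miss: fetch from home
            self.caches[proc] = self.memory
            self.sharers.add(proc)              # record the new sharer
        return self.caches[proc]

    def write(self, proc, value):
        # Invalidate every other sharer the directory has recorded.
        for p in self.sharers - {proc}:
            self.caches[p] = None
        self.sharers = {proc}
        self.caches[proc] = value
        self.memory = value                     # write-through for brevity

d = Directory(4)
d.read(0); d.read(1)      # processors 0 and 1 both cache the block
d.write(0, 42)            # processor 0's write invalidates processor 1
print(d.read(1))          # -> 42: processor 1 misses and re-fetches
```

Because the directory knows exactly which caches hold copies, invalidations go only where needed, instead of being broadcast to all processors.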
<p>
Research focuses on data collection methods, analytical and
simulation techniques for evaluating large-scale parallel
computers, and designing and building new interconnection
networks and processor-cache memory systems.
<p>
An important goal is to couple the design of algorithms for
compilers and operating systems to their resulting impact on
multiprocessor performance. Address traces
obtained using various data collection techniques help
evaluate architectural choices. This insight in turn feeds
back to software development.
<p>
A major part of the project investigates hardware
technologies and packaging for future high-density,
low-latency interconnections.

<p>
<!WA10><a href="http://www.lcs.mit.edu/web_project/Brochure/contents.html"><!WA11><img align=left src=http://www.lcs.mit.edu/web_project/Brochure/icons/contents_motif.gif></a>
<!WA12><a href="http://www.lcs.mit.edu/web_project/Brochure/cva/cva.html"><!WA13><img align=left src=http://www.lcs.mit.edu/web_project/Brochure/icons/previous_group_motif.gif></a>
<!WA14><a href="http://www.lcs.mit.edu/web_project/Brochure/csg/csg.html"><!WA15><img align=left src=http://www.lcs.mit.edu/web_project/Brochure/icons/next_group_motif.gif></a>
</BODY>
</HTML>
