<html><head><title>Example Codes</title></head><body>
<h2> 2D Laplace Example Codes  </h2>

<blockquote>

<a href="ex1.c.html"><h2>Example 1</h2></a>
<p>
This is a two processor example.  Each processor owns one
box in the grid.  For reference, the two grid boxes are those
in the example diagram in the struct interface chapter
of the User's Manual. Note that while the diagram shows both
boxes as belonging to processor 0, in this example code one
box is given to each processor. The
solver is PCG with no preconditioner.
<p>
We recommend viewing examples 1-4 sequentially for
a nice overview/tutorial of the struct interface.
<a href="ex2.c.html"><h2>Example 2</h2></a>
<p>
This is a two processor example and is similar to the previous
structured interface example (Example 1). However, in
this case the grid boxes are exactly those in the example
diagram in the struct interface chapter of the User's Manual.
(Processor 0 owns two boxes and processor 1 owns one box.)
The solver is PCG with SMG preconditioner.
<p>
We recommend viewing example 1 before viewing this
example.
<a href="ex3.c.html"><h2>Example 3</h2></a>
<p>
This code solves a system corresponding to a discretization
of the Laplace equation with zero boundary conditions on the
unit square. The domain is split into an N x N processor grid.
Thus, the given number of processors should be a perfect square.
Each processor's piece of the grid has n x n cells with n x n
nodes connected by the standard 5-point stencil. Note that the
struct interface assumes a cell-centered grid, and, therefore,
the nodes are not shared.  This example demonstrates more
features than the previous two struct examples (Example 1 and
Example 2).  Two solvers are available.
<p>
To incorporate the boundary conditions, we do the following:
Let x_i and x_b be the interior and boundary parts of the
solution vector x. We can split the matrix A as
<p>
<center> A = [A_ii A_ib; A_bi A_bb]. </center>
<p>
Let u_0 be the Dirichlet B.C.  We can simply say that x_b = u_0.
If b_i is the right-hand side, then we just need to solve in
the interior:
<p>
<center> A_ii x_i = b_i - A_ib u_0. </center>
<p>
For this particular example, u_0 = 0, so we are just solving
A_ii x_i = b_i.
<p>
We recommend viewing examples 1 and 2 before viewing this
example.
<a href="ex5.c.html"><h2>Example 5</h2></a>
<p>
This example solves the 2-D
Laplacian problem with zero boundary conditions
on an n x n grid.  The number of unknowns is N = n^2.
The standard 5-point stencil is used, and we solve
for the interior nodes only.
<p>
This example solves the same problem as Example 3.
Available solvers are AMG, PCG, PCG with AMG or
ParaSails preconditioners, or Flexible GMRES with
AMG preconditioner.
<a href="ex10.cxx.html"><h2>Example 10</h2></a>
<p>
This code solves a system corresponding to a discretization
of the Laplace equation with zero boundary conditions on the
unit square. The domain is split into an n x n grid of
quadrilateral elements, and each processor owns a horizontal
strip of size m x n, where m = n/nprocs. We use bilinear
finite element discretization, so there are nodes (vertices)
that are shared between neighboring processors. The Finite
Element Interface is used to assemble the matrix and solve
the problem. Nine different solvers are available.
<a href="ex12.c.html"><h2>Example 12</h2></a>
<p>
The grid layout is the same as ex1, but with nodal unknowns. The solver is PCG
preconditioned with either PFMG or BoomerAMG, selected on the command line.
<p>
We recommend viewing the Struct examples before viewing this and the other
SStruct examples.  This is one of the simplest SStruct examples, used primarily
to demonstrate how to set up non-cell-centered problems, and to demonstrate how
easy it is to switch between structured solvers (PFMG) and solvers designed for
more general settings (AMG).
<a href="ex13.c.html"><h2>Example 13</h2></a>
<p>
This code solves the 2D Laplace equation using bilinear finite element
discretization on a mesh with an "enhanced connectivity" point.  Specifically,
we solve -Delta u = 1 with zero boundary conditions on a star-shaped domain
consisting of identical rhombic parts each meshed with a uniform n x n grid.
Every part is assigned to a different processor and all parts meet at the
origin, equally subdividing the 2*pi angle there. The case of six processors
(parts) looks as follows:
<p>
<pre>
                                    +
                                   / \
                                  /   \
                                 /     \
                       +--------+   1   +---------+
                        \        \     /         /
                         \    2   \   /    0    /
                          \        \ /         /
                           +--------+---------+
                          /        / \         \
                         /    3   /   \    5    \
                        /        /     \         \
                       +--------+   4   +---------+
                                 \     /
                                  \   /
                                   \ /
                                    +
</pre>
<p>
Note that in this problem we use nodal variables, which will be shared between
the different parts, so the node at the origin, for example, will belong to all
parts.
<p>
We recommend viewing the Struct examples before viewing this and the other
SStruct examples.  The primary role of this particular SStruct example is to
demonstrate how to set up non-cell-centered problems, and specifically problems
with an "enhanced connectivity" point.
<a href="ex14.c.html"><h2>Example 14</h2></a>
<p>
This is a version of <a href="ex13.c.html">Example 13</a>, which uses the SStruct
FEM input functions instead of stencils to describe a problem on a mesh with an "enhanced
connectivity" point.  This is the recommended way to set up a finite element
problem in the SStruct interface.
<a href="ex16.c.html"><h2>Example 16</h2></a>
<p>
This code solves the 2D Laplace equation using a high order Q3 finite element
discretization.  Specifically, we solve -Delta u = 1 with zero boundary
conditions on a unit square domain meshed with a uniform grid.  The mesh is
distributed across an N x N process grid, with each processor containing an n x
n sub-mesh of data, so the global mesh is nN x nN.

</blockquote>

<center>
<a href="http://www.llnl.gov/CASC/hypre"><img border="0" src="hypre_wiw.gif"></a>
</center>
</body></html>
