\chapter{Parallelizing Loops}

\begin{itemize}

\item{Chore Graphs}

\item{Example: Warshall's Algorithm}

\item{Example: Longest Common Subsequence (LCS)}

\item{Example: Shell Sort}

\end{itemize}
Loops are probably the best place to look for parallelism; they account for most of the computing time in most programs, and they have a small amount of code that can be packaged into runnables. When converting loops to parallel threads, there are a number of things to consider:
\par
Some loops are trivially parallel: Each iteration is independent of the others. Some loops are serial; their iterations must be completed in order. Some nestings of serial loops can be parallelized by staggering (or skewing) the executions of the inner loops into pipelines.
\par
The grain size is always a consideration. If the body of the loop is too short, an individual iteration is not worth executing in its own thread. It is faster to run several iterations of the loop in each thread. Several iterations take longer than one, but they require fewer initializations of threads or runnables, and that saves time.
\par
Alternatively, inner serial loops can provide a natural grain size for the computation. Suppose every element of a two-dimensional array can be updated independently. A single element update is probably not worth creating a thread, but an entire row or an entire column might be.
\par
As with parallel executions of subroutines, the number of processors available is another consideration. Creating too few threads won't use all the parallelism available. Creating too many will waste the overhead required to create them.
\par
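To make the batching concrete, here is a minimal sketch (ours, not from the text; the array and the increment body are purely illustrative) that gives each thread one contiguous block of iterations as its grain:

```java
// Sketch: giving each thread a block of iterations as its grain.
// The data array and the increment loop body are illustrative only.
public class BatchedLoop {
    public static void parallelIncrement(int[] data, int numThreads) {
        Thread[] workers = new Thread[numThreads];
        int n = data.length;
        for (int t = 0; t < numThreads; t++) {
            final int lo = t * n / numThreads;       // start of this thread's block
            final int hi = (t + 1) * n / numThreads; // end of block (exclusive)
            workers[t] = new Thread(() -> {
                for (int i = lo; i < hi; i++) data[i]++; // several iterations per thread
            });
            workers[t].start();
        }
        for (Thread w : workers) {                   // wait for every block to finish
            try { w.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

With ten iterations and three threads, the blocks are 0..2, 3..5, and 6..9: each thread initializes once and then runs several iterations.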

\section{Chore Graphs}
Chore graphs (usually called task graphs) provide a way of describing the dependencies among parts of a computation. We will not be formalizing them. We will be using them for sketching a computation and describing how it might be parallelized.
\par
A chore is a piece of computation executed sequentially. A chore graph is a directed graph that represents a computation. The chores are represented by nodes. The directed edges represent the dependencies. The chore at the source of the edge must be completed before the chore at the destination of the edge can be started. There are various sorts of dependencies. Control dependencies are exemplified by
\texttt{if}
statements, where some chore is only to be done if some test yields true. Data dependencies refer to one chore reading data that another writes or two chores writing data that must be written in the correct order. On a multiprocessor machine, it should be possible to execute chores in parallel if there are no dependencies between them.
\par
It is the intention that chores do not communicate with other chores while they are executing. We want all other chores that chore
\emph{x}
depends on to complete before
\emph{x}
is scheduled. We do not want
\emph{x}
to block waiting for another thread to do something. This is to make it easier to figure out how to allocate chore graphs to parallel threads.
\par

\includegraphics{figures/ParLoops-1}

The figure above represents a chore graph for a loop, each of whose iterations is independent. The labels 0, 1, 2, ...,
\emph{n}
-1 are the iteration numbers for the loop. They may or may not be the values of a loop index; the loop might not use an index variable.
\par
If we want to describe an entire computation, we will have an acyclic chore graph. The nodes represent actual executions of chores.
\par
Unfortunately, chore graphs for full executions are usually too large and too data dependent to draw in their entirety, so we will often draw cyclic graphs to represent loops and let dashed nodes represent entire subgraphs, particularly where the nodes are subroutine calls. The figure below shows chore graphs for some algorithms discussed in the previous chapter.
\par

\includegraphics{figures/ParLoops-2}

\subsubsection{Gathering Chores into Threads}
Paths of chores in a chore graph can be assigned to a thread. Each path has its own class with its code in a
\texttt{run()}
method. There is no synchronization problem between chores in the path assigned to an individual thread, since these are done in the proper order by the flow of control. There is, however, a problem synchronizing chores in different threads. Cross-thread synchronization can be done by having the source
\emph{up}
a semaphore that the destination
\emph{downs}
, by setting a
\texttt{Future}
object that the destination awaits with
\texttt{getValue()}
, by putting a value into a queue that the destination reads from, or by some other mechanism.
\par
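As one concrete instance of the queue mechanism, here is a sketch using the standard java.util.concurrent classes rather than the book's own Semaphore and Future library; the two chores (produce a value, double it) are invented for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: cross-thread chore synchronization through a queue.
// The destination chore cannot start its work until the source
// chore has published its result.
public class ChoreHandoff {
    public static int runChores() {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        final int[] result = new int[1];
        Thread source = new Thread(() -> {
            int v = 42;          // chore A: compute something
            q.add(v);            // signal completion by publishing the value
        });
        Thread dest = new Thread(() -> {
            try {
                result[0] = 2 * q.take(); // chore B blocks until A's value arrives
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        source.start(); dest.start();
        try { source.join(); dest.join(); } catch (InterruptedException e) {}
        return result[0];
    }
}
```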

\section{Example: Warshall's Algorithm}
We will use Warshall's algorithm as an example in describing a number of techniques. Warshall's algorithm solves a number of similar problems:
\par

\begin{itemize}

\item{Given a directed graph, find which nodes are connected by paths.}

\item{Given a relation $R$, find the transitive closure of the relation, $R^*$. That is, if there are elements $x_1, x_2, \ldots, x_n$ such that $x_1\,R\,x_2$, $x_2\,R\,x_3$, \ldots, $x_{n-1}\,R\,x_n$, then $x_1\,R^*\,x_n$.}

\item{Given a network (i.e., a graph with distances associated with edges), find the shortest path between each pair of nodes.}


\end{itemize}
Warshall's algorithm expresses a graph or relation as a boolean matrix. Let
\emph{A}
be the matrix. For graphs,
\emph{A}
is an adjacency matrix: $A_{ij}$ is true if and only if there is an edge between the nodes numbered
\emph{i}
and
\emph{j}
in the graph. Similarly, for relations, $A_{ij}$ is true if and only if the elements numbered
\emph{i}
and
\emph{j}
are related in
\emph{R}
. For the shortest-path problem, we use an array of numbers instead: $A_{ij}$ is infinity if nodes
\emph{i}
and
\emph{j}
are not adjacent and contains the distance between them otherwise.
\par
Warshall's algorithm transforms the matrix in place. The description of the result varies with the problem being solved. The graph represented by the adjacency matrix is converted into another graph where there is an edge for every path in the original graph. The relationship represented by the input matrix is converted into its transitive closure. The matrix representing the network is converted into a matrix showing the minimum distances between nodes.
\par
Warshall's algorithm is shown below. It consists of three nested loops, the inner two of which can be executed in parallel. The best way to understand it is to think of it in terms of graphs. Suppose the accompanying figure represents a shortest path from node a to node z. We will ignore the trivial case where the path has only one edge and no internal nodes. Let k' be the lowest numbered internal node. When the outer loop executes for index
\emph{k}
= k', at some point the middle loop will set
\emph{i}
to i', and the inner loop will set
\emph{j}
to j'. At that point, A[i][k] will be true, since there is an edge from i' to k', and A[k][j] will be true, since there is an edge from k' to j'. Therefore A[i][k] and A[k][j] is true, which sets A[i][j] true. This draws an edge from i' to j', bypassing k'. There is now a shorter path from a to z. If this new path has more than one edge, it has a lowest numbered internal node with a number higher than k'. The process of bypassing internal nodes will continue until all internal nodes have been bypassed and there is an edge directly from a to z.
\par

	%\begin{tabular}
	Warshall's algorithm.
	\begin{verbatim}
for (k=0; k<N; k++)
    for (i=0; i<N; i++)
        for (j=0; j<N; j++)
            A[i][j] = A[i][j] || (A[i][k] && A[k][j]);
	\end{verbatim}
	%\end{tabular}
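The loop nest above is directly runnable; here it is wrapped in a small method (the class and method names are ours) so it can be tried on an adjacency matrix:

```java
// Serial Warshall's algorithm: in-place transitive closure of a
// boolean adjacency matrix, exactly the k/i/j loop nest shown above.
public class WarshallSerial {
    public static void closure(boolean[][] a) {
        int n = a.length;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    a[i][j] = a[i][j] || (a[i][k] && a[k][j]);
    }
}
```

For the two-edge path 0 to 1 to 2, the closure adds the bypassing edge 0 to 2.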
	
\includegraphics{figures/ParLoops-3}
The algorithm works on all paths at the same time, bypassing all paths by adding direct edges from source to destination. The graph representing the resulting matrix will have an edge in the resulting graph for every path in the original graph.
\par
The outer loop must execute one iteration to completion before beginning the next. Each node must be bypassed entirely, so that the new edges can be seen when processing the next node. The inner loops, however, can be executed completely in parallel. The figure below shows the chore graph for Warshall's algorithm.
\par

\includegraphics{figures/ParLoops-4}
The Floyd variant for solving the shortest-path problem is shown next.
\par

	%\begin{tabular}
	Warshall/Floyd shortest path algorithm.
	\begin{verbatim}
for (k=0; k<N; k++)
    for (i=0; i<N; i++)
        for (j=0; j<N; j++)
            A[i][j] = Math.min(A[i][j], A[i][k] + A[k][j]);
	\end{verbatim}
	%\end{tabular}
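Likewise, the Floyd variant can be wrapped for experiment. The INF constant is our choice, half of Integer.MAX_VALUE so that INF + INF does not overflow:

```java
// Serial Floyd variant: in-place all-pairs shortest paths.
// INF marks "no edge"; it is kept small enough that INF + INF
// does not wrap around to a negative number.
public class FloydSerial {
    static final int INF = Integer.MAX_VALUE / 2;

    public static void shortestPaths(int[][] a) {
        int n = a.length;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    a[i][j] = Math.min(a[i][j], a[i][k] + a[k][j]);
    }
}
```

On a three-node network with edges 0 to 1 of length 1, 1 to 2 of length 2, and 0 to 2 of length 10, the direct distance 10 is replaced by the two-hop distance 3.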
	
\subsubsection{Static Scheduling}
Our first implementation of Warshall's algorithm,
\texttt{Warshall1}
, is shown below. An instance of a
\texttt{Warshall1}
object,
\texttt{x}
, is created specifying the number of threads to use in computing the transitive closure. A transitive closure can then be performed on a boolean matrix
\texttt{A}
by passing it to
\texttt{Warshall1}
's
\texttt{closure}
method,
\texttt{x.closure(A)}
.
\par

	%\begin{tabular}
	Warshall's algorithm, version 1.
	\begin{verbatim}
class Warshall1{
    int numThreads;

    public Warshall1(int numThreads){
        this.numThreads=numThreads;
    }

    private class Close extends Thread{
        boolean[][] a; int t; SimpleBarrier b; Accumulator done;

        Close(boolean[][] a, int t, SimpleBarrier b,
                Accumulator done){
            this.a=a; this.t=t; this.b=b; this.done=done;
        }

        public void run() {
            try {
                int i,j,k;
                for (k=0;k<a.length;k++) {
                    for (i=t;i<a.length;i+=numThreads) {
                        if (a[i][k])
                            for(j=0;j<a.length;j++) {
                                a[i][j] = a[i][j] | a[k][j];
                            }
                    }
                    b.gather();
                }
                done.signal();
            } catch (InterruptedException ex){}
        }
    }

    public void closure(boolean[][] a) {
        int i;
        Accumulator done=new Accumulator(numThreads);
        SimpleBarrier b=new SimpleBarrier(numThreads);
        for (i=0;i<numThreads;i++) {
            new Close(a,i,b,done).start();
        }
        try {
            done.getFuture().getValue();
        } catch (InterruptedException ex){}
    }
}
	\end{verbatim}
	%\end{tabular}
	Several threads run the inner loops in parallel. Each of them is an instance of class
\texttt{Close}
. The closure method creates one
\texttt{Close}
object for each thread and passes it the following four things:
\par

\begin{enumerate}

\item{The boolean array.}

\item{The thread's number: 0, 1, ..., numThreads-1.}

\item{A SimpleBarrier object for the threads to use to synchronize on between processing iterations of the outer k loop.}

\item{An Accumulator object to use to signal when the threads are finished running the algorithm.}

\end{enumerate}
There are $n^2$ inner computations that could be done in parallel. They are too small to assign each to a separate thread. Each of the
\emph{n}
rows could be done separately, but that's again too many. The solution we use is to give each of the
\emph{t}
threads
\emph{n}/\emph{t}
of the rows. We number the threads from 0 to
\emph{t}
-1. Thread
\emph{i}
takes rows
\emph{i}, \emph{i}+\emph{t}, \emph{i}+2\emph{t}, ....
This is an instance of static scheduling, since the division of labor is determined before the threads are run. Alternative partitionings would put contiguous sequences of rows together. Several methods for static allocation of rows are shown in the sidebar entitled ``Static allocation.''
\par

	%\begin{tabular}
	Static allocation. Suppose we have
\emph{N}
rows of an array that must be processed. How can we divide them evenly among
\emph{P}
threads?
\par

\begin{enumerate}

\item{We could give $\lfloor N/P \rfloor$ rows to each of the first \emph{P}-1 threads and the remaining rows to the last thread. If \emph{N} = 15 and \emph{P} = 4, threads 0, 1, and 2 get 3 rows, and thread 3 gets 6. The load is unbalanced, and the completion time will be dominated by the last thread.}

\item{We could give $\lceil N/P \rceil$ rows to each of the first \emph{P}-1 threads and the remaining rows to the last. If \emph{N} = 21 and \emph{P} = 5, we would assign 5 rows to each of the first 4 threads and 1 to the last. The last is underutilized, but that's not as severe as case (1), where the last thread was the bottleneck.}

\item{We could try to assign the rows so that no thread has more than one row more than any other thread. An easy way to do this is to assign thread \emph{i} all rows \emph{j} such that $j \bmod P = i$, assuming that rows are numbered 0 through \emph{N}-1 and that threads are numbered 0 through \emph{P}-1. Thread \emph{i} will contain rows \emph{i}, \emph{i}+\emph{P}, \emph{i}+2\emph{P}, \emph{i}+3\emph{P}, ....}

\item{We can assign blocks of rows to threads as in (1) and (2), but guarantee that no thread has more than one more row than any other, as in (3). Assign thread \emph{i} the rows in the range $L_i$ to $U_i$ inclusive, where}

\end{enumerate}

\includegraphics{figures/ParLoops-9}
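A sketch of scheme (4), under our reading of the bound formulas: thread i's first row is i*(N/P) + min(i, N mod P), which spreads the N mod P leftover rows over the first threads so that block sizes differ by at most one. The class and method names are ours.

```java
// Sketch: balanced contiguous block allocation, scheme (4) above.
// Thread i processes rows lower(i) through upper(i) inclusive.
public class BlockAlloc {
    public static int lower(int i, int N, int P) {
        // the first (N mod P) threads each absorb one leftover row
        return i * (N / P) + Math.min(i, N % P);
    }
    public static int upper(int i, int N, int P) {
        return lower(i + 1, N, P) - 1;  // inclusive upper bound
    }
}
```

For N = 15 and P = 4, the blocks are 0..3, 4..7, 8..11, and 12..14: sizes 4, 4, 4, 3, never differing by more than one.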

	%\end{tabular}
	The performance of Warshall's algorithm with static scheduling is shown in the accompanying figure. The horizontal axis shows the number of rows and columns. The number of elements is the square of that, and the number of operations is the cube. It was run on a dual-processor system running Solaris. The Java system used kernel threads. Two threads, matching the number of processors, performed best.
\par

\subsubsection{Dynamic Scheduling}
A risk of static scheduling is that, having divided up the work evenly among
\emph{t}
threads, there might not be
\emph{t}
processors available to execute them. Several threads may run in parallel, but the completion of an iteration of the
\emph{k}
loop will be delayed until other threads can be given processors. If, for example, we created four threads, but only three processors were available, then three threads might run to completion, and then two of the processors would have to wait while the remaining thread runs.
\par

\includegraphics{figures/ParLoops-10}
This would be ameliorated somewhat by time slicing. The three processors could be swapped among the four threads by the operating system. But even swapping processors is a bit expensive compared with just leaving the processor allocated to a single thread.
\par
Dynamic scheduling doesn't allocate all the work to the threads in even chunks, but allows threads to allocate more work when they are done with one chunk. It allows threads that have a processor to continue running.
\par
Our second implementation of Warshall's algorithm,
\texttt{Warshall2}
, uses dynamic allocation. Each thread requests a sequence of rows to process from a dynamic allocation object. Abstract class
\texttt{DynAlloc}
(shown below) gives the public interface of our dynamic allocation objects. A call to method
\texttt{alloc()}
will fill in a
\texttt{Range}
object with the bounds of a range of values (e.g., row numbers) to process. It will return a boolean to indicate whether any rows were allocated (
\texttt{true}
) or whether the iteration is complete (
\texttt{false}
).
\par

	%\begin{tabular}
	The DynAlloc class.
	\begin{verbatim}
public abstract class DynAlloc {
    public static class Range{int start,end,num;}
    public abstract boolean alloc(Range r);
}
	\end{verbatim}
	%\end{tabular}
	The fields of
\texttt{DynAlloc.Range}
are as follows:
\par

\begin{itemize}

\item{start and end: The allocated range of values runs from start up to, but not including, end, the semiopen interval [start, end).}

\item{num: This is the number of values in the interval. It is redundant, since it equals end - start.}

\end{itemize}
An implementation of
\texttt{DynAlloc}
is
\texttt{DynAllocShare}
, which is shown below.
\texttt{DynAlloc}
and
\texttt{DynAllocShare}
are best discussed together.
\par

	%\begin{tabular}
	The DynAllocShare class.
	\begin{verbatim}
public class DynAllocShare extends DynAlloc{
    int range; int nt; int min;
    int zc; int current;

    public DynAllocShare(int range,int nt,int min){
        this.range=range;
        this.nt=nt;
        this.min=min;
        zc=0;
        current=0;
    }

    public synchronized boolean alloc(Range r){
        if (current>=range){
            zc++;
            if (zc>=nt) {
                current=0;
                zc=0;
            }
            r.start=r.end=range;
            r.num=0;
            return false;
        }
        r.start=current;
        int rem=range-current;
        int num=(rem+nt-1)/nt; // ceiling(rem/nt)
        if (num<min) num=min;
        if (num>rem) num=rem;
        current+=num;
        r.end=current;
        r.num=num;
        return true;
    }
}
	\end{verbatim}
	%\end{tabular}
	A
	A
\texttt{DynAllocShare}
object has the policy of allocating on each call
\includegraphics{figures/ParLoops-11}
of the remaining numbers in the range, where
\emph{n}
is the number of threads. It allocates large blocks of numbers to the first calls and smaller blocks on subsequent calls. The idea is that the larger blocks can be processed without delays for synchronization. The smaller blocks later fill in the schedule to even up the processing time. However, to avoid allocating blocks that are too small for efficient processing,
\texttt{DynAllocShare}
takes a minimum size of block to allocate and will not allocate any block smaller than that until the very last allocation.
\par
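To see the policy in numbers, this sketch (ours) replays the allocation arithmetic from alloc() sequentially for a single pass, collecting the block sizes handed out:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the sequence of chunk sizes DynAllocShare would hand out
// in one pass, replayed without the threads or synchronization.
public class ChunkTrace {
    public static List<Integer> chunks(int range, int nt, int min) {
        List<Integer> out = new ArrayList<>();
        int current = 0;
        while (current < range) {
            int rem = range - current;
            int num = (rem + nt - 1) / nt;  // ceiling(rem/nt), as in alloc()
            if (num < min) num = min;
            if (num > rem) num = rem;
            out.add(num);
            current += num;
        }
        return out;
    }
}
```

For a range of 100 with 4 threads and a minimum block of 2, the blocks shrink from 25 down toward the minimum: 25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 2, 2.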
The
\texttt{DynAllocShare}
objects are self-resetting. They keep track of the number of calls in a row that have returned false and given empty ranges. Once that number is equal to the number of threads, the object is reset and will start allocating over again. This allows the
\texttt{DynAllocShare}
objects to be reused in inner loops, rather than requiring new ones be allocated for each iteration.
\par
The parameters and local fields of
\texttt{DynAllocShare}
are as follows:
\par

\begin{itemize}

\item{range: This is the upper bound on the range of integers to return. The DynAllocShare object will allocate contiguous blocks of integers in the range [0, range) (i.e., from zero up to, but not including, range).}

\item{nt: This is the number of threads. It is used both to compute the next block size to allocate (1/nt of the remaining range) and to determine when to reset the object (after allocating nt zero-length ranges).}

\item{min: This is the minimum size block to allocate until there are fewer than that many values left.}

\item{zc: This is the zero count, the number of zero-sized ranges allocated. When zc equals nt, the object will reset.}

\item{current: This stores the beginning number of the next block to allocate. When current equals range, there are no more values to allocate.}

\end{itemize}

\texttt{Warshall2}
, shown below, is Warshall's algorithm with dynamic scheduling. The major difference from
\texttt{Warshall1}
is that the
\texttt{Close}
threads are given a
\texttt{DynAlloc}
object
\texttt{d}
as one of their parameters. A
\texttt{Close}
thread has an extra loop, while (d.alloc(r)), around its row-processing
\texttt{for}
loop, allocating ranges of rows to process. When
\texttt{alloc}
returns
\texttt{false}
, there are no more rows to process in the current iteration of the outer
\texttt{k}
loop, so the thread drops down to the
\texttt{gather}
call.
\par
The performance of Warshall's algorithm with dynamic scheduling is shown in the accompanying figure. As before, the horizontal axis shows the number of rows and columns. The number of elements is the square of that, and the number of operations is the cube. The program was run on a dual-processor system running Solaris. The Java system used kernel threads. Two threads, matching the number of processors, performed best.
\par

\includegraphics{figures/ParLoops-12}
A comparison of the two implementations of Warshall's algorithm is shown in the accompanying figure. The runs using two threads are compared. The two algorithms perform much alike.
\par

	%\begin{tabular}
	Warshall2: Warshall's algorithm with dynamic allocation.
	\begin{verbatim}
class Warshall2{
    int numThreads;

    public Warshall2(int numThreads){
        this.numThreads=numThreads;
    }

    private class Close extends Thread{
        boolean[][] a;
        DynAlloc d; SimpleBarrier b; Accumulator done;

        Close(boolean[][] a, DynAlloc d,
                SimpleBarrier b, Accumulator done){
            this.a=a;
            this.d=d; this.b=b; this.done=done;
        }

        public void run() {
            try {
                int i,j,k;
                DynAlloc.Range r=new DynAlloc.Range();
                for (k=0;k<a.length;k++) {
                    while(d.alloc(r)){
                        for (i=r.start;i<r.end;i++) {
                            if (a[i][k])
                                for(j=0;j<a.length;j++) {
                                    a[i][j] = a[i][j] | a[k][j];
                                }
                        }
                    }
                    b.gather();
                }
                done.signal();
            } catch (InterruptedException ex){}
        }
    }

    public void closure(boolean[][] a) {
        int i;
        Accumulator done=new Accumulator(numThreads);
        SimpleBarrier b=new SimpleBarrier(numThreads);
        DynAllocShare d=new DynAllocShare(a.length,numThreads,2);
        for (i=0;i<numThreads;i++) {
            new Close(a,d,b,done).start();
        }
        try {
            done.getFuture().getValue();
        } catch (InterruptedException ex){}
    }
}
	\end{verbatim}
	%\end{tabular}
	
\par

\includegraphics{figures/ParLoops-13}

\section{Example: Longest Common Subsequence}
A longest common subsequence (LCS) of two strings is a longest sequence of characters that occurs in order in the two strings. It differs from the
\emph{longest common substring}
in that the characters in the longest common subsequence need not be contiguous. There may, of course, be more than one LCS, since there may be several subsequences with the same length.
\par
There is a folk algorithm to find the length of the LCS of two strings. The algorithm uses a form of dynamic programming. In divide-and-conquer algorithms, recall that the overall problem is broken into parts, the parts are solved individually, and the solutions are assembled into a solution to the overall problem. Dynamic programming is similar, except that the best way to divide the overall problem into parts is not known before the subproblems are solved, so dynamic programming solves all subproblems and then finds the best way to assemble them.
\par
The algorithm works as follows: Let the two strings be
\texttt{c0}
and
\texttt{c1}
. Create a two-dimensional array
\texttt{a}

\par

\begin{verbatim}
int[][] a = new int[c0.length()+1][c1.length()+1];
\end{verbatim}
We will fill in the array so that a[i][j] is the length of the LCS of
\texttt{c0.substring(0,i)}
and
\texttt{c1.substring(0,j)}
. Recall that
\texttt{s.substring(m,n)}
is the substring of
\texttt{s}
from position
\texttt{m}
up to, but not including, position
\texttt{n}
.
\par
Initialize
\texttt{a[i][0]}
to 0 for all
\texttt{i}
and
\texttt{a[0][j]}
to 0 for all
\texttt{j}
, since there are no characters in an empty substring. The other elements,
\texttt{a[i][j]}
, are filled in as follows:
\par

	\begin{verbatim}
   if (c0.charAt(i-1) == c1.charAt(j-1)) a[i][j]=a[i-1][j-1]+1;
   else a[i][j]=Math.max(a[i][j-1],a[i-1][j]);
	\end{verbatim}
	Why? Element
\texttt{a[i-1][j-1]}
holds the length of the LCS of
\texttt{c0.substring(0,i-1)}
and
\texttt{c1.substring(0,j-1)}
. If characters c0.charAt(i-1) and c1.charAt(j-1) are equal, that LCS can be extended by one to length
\texttt{a[i-1][j-1]+1}
. If these characters don't match, then what? In that case, we ignore the last character of one or the other of the strings. The length of the LCS is the larger of
\texttt{a[i][j-1]}
and
\texttt{a[i-1][j]}
, the lengths of the LCSs we get by ignoring the last character of
\texttt{c1.substring(0,j)}
or of
\texttt{c0.substring(0,i)}
, respectively.
\par
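A sequential version of the table fill (a sketch; the class and method names are ours) shows the recurrence at work before we turn to parallel orderings:

```java
// Sequential LCS length: fill the (length+1) x (length+1) table
// row by row using the recurrence described above.
public class LcsLength {
    public static int length(String c0, String c1) {
        int[][] a = new int[c0.length() + 1][c1.length() + 1];
        // row 0 and column 0 stay 0: an empty substring shares nothing
        for (int i = 1; i <= c0.length(); i++)
            for (int j = 1; j <= c1.length(); j++)
                if (c0.charAt(i - 1) == c1.charAt(j - 1))
                    a[i][j] = a[i - 1][j - 1] + 1;   // extend the diagonal LCS
                else
                    a[i][j] = Math.max(a[i][j - 1], a[i - 1][j]); // drop one last char
        return a[c0.length()][c1.length()];
    }
}
```

For example, "ABCBDAB" and "BDCABA" have an LCS of length 4 (one such subsequence is "BCBA").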
The chore graph for the calculation of the LCS is shown in the accompanying figure. Any order of calculation that is consistent with the dependencies is permissible. Two are fairly obvious: (1) by rows, top to bottom, and (2) by columns, left to right.
\par

\includegraphics{figures/ParLoops-14}
Another possibility is along diagonals. All
\texttt{a[i][j]}
where
\texttt{i+j==m}
can be calculated at the same time, for
\texttt{m}
stepping from 2 to
\texttt{c0.length()+c1.length()}
. This order of calculation is pictured in the accompanying figure. Visualizing waves of computation passing across arrays is a good technique for designing parallel array algorithms. It has been researched under the names systolic arrays and wavefront arrays.
\par

\includegraphics{figures/ParLoops-15}
Our implementation of the LCS algorithm divides the array into vertical bands, as pictured in the accompanying figure. Each band is filled in row by row from top to bottom. Each band (except the leftmost) must wait for the band to its left to fill in the last element of a row before it can start filling in that row. This is an instance of the producer-consumer relationship.
\par

\includegraphics{figures/ParLoops-16}

\par
The code for class
\texttt{LCS}
is shown below. It has the following fields:
\par

	%\begin{tabular}
	Class LCS.
	\begin{verbatim}
class LCS {
    int numThreads;
    char[] c0; char[] c1;
    int[][] a;
    Accumulator done;

    public LCS(char[] c0, char[] c1, int numThreads){
        this.numThreads=numThreads;
        this.c0=c0;
        this.c1=c1;
        int i;
        done=new Accumulator(numThreads);
        a=new int[c0.length+1][c1.length+1];
        Semaphore left=new Semaphore(c0.length),right;
        for (i=0;i<numThreads;i++) {
            right=new Semaphore();
            new Band(
                startOfBand(i,numThreads,c1.length),
                startOfBand(i+1,numThreads,c1.length)-1,
                left,right).start();
            left=right;
        }
    }

    public LCS(String s0, String s1, int numThreads){
        this(s0.toCharArray(),s1.toCharArray(),numThreads);
    }

    private class Band extends Thread{
        // see Example 6-8
    }

    int startOfBand(int i,int nb,int N) {
        return 1+i*(N/nb)+Math.min(i,N%nb);
    }

    public int getLength() {
        try {
            done.getFuture().getValue();
        } catch (InterruptedException ex){}
        return a[c0.length][c1.length];
    }

    public int[][] getArray() {
        try {
            done.getFuture().getValue();
        } catch (InterruptedException ex){}
        return a;
    }
}
	\end{verbatim}
	%\end{tabular}
	
\begin{itemize}

\item{numThreads : This is the number of threads to run in parallel, hence the number of bands.}

\item{c0 and c1 : These are the two character arrays in which to find the longest common subsequence.}

\item{a : This is the two-dimensional array used as previously described.}

\item{done : This is an accumulator used to detect when the threads have terminated.}

\end{itemize}
Class
\texttt{LCS}

has two constructors. Both take three parameters: the two strings and the number of threads to use. In one of the constructors, the strings are represented as character arrays and in the other, as
\texttt{String}
objects.
\par
The constructor that takes character arrays is the one that does the actual work. It allocates the
\texttt{Accumulator}

\texttt{done}
to detect when the threads have terminated. It allocates the array
\texttt{a}
. The array's top and left edges are automatically initialized to zero when it is created. It finishes by creating one
\texttt{Band}
thread (see
\ref{ParLoops.xml#id(32339)}[MISSING HREF]
) for each vertical band. These
\texttt{Band}
threads are told the first and last column numbers they are to fill in and are given two semaphores. They
\texttt{down}
their
\texttt{left}
semaphore before starting a row and
\texttt{up}
their
\texttt{right}
semaphore after finishing a row. This is the extent of the producer-consumer synchronization they use.
\par
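The producer-consumer handshake between adjacent bands can be sketched with \texttt{java.util.concurrent.Semaphore} standing in for the book's \texttt{Semaphore} class: \texttt{acquire()} plays the role of \texttt{down()} and \texttt{release()} of \texttt{up()}. The class name \texttt{BandPipeline} and the row and band counts are invented for illustration; instead of filling in table entries, each thread just counts its share of work.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the band pipeline: each band waits for the band to its left
// before working on a row, and releases the band to its right afterwards.
public class BandPipeline {
    static final AtomicInteger filled = new AtomicInteger();

    public static void run(int rows, int bands) throws InterruptedException {
        Semaphore left = new Semaphore(rows); // leftmost band is never blocked
        Thread[] ts = new Thread[bands];
        for (int b = 0; b < bands; b++) {
            final Semaphore myLeft = left, myRight = new Semaphore(0);
            ts[b] = new Thread(() -> {
                try {
                    for (int r = 0; r < rows; r++) {
                        myLeft.acquire();         // down(): wait for the band to the left
                        filled.incrementAndGet(); // "fill in" this band's part of row r
                        myRight.release();        // up(): release the band to the right
                    }
                } catch (InterruptedException e) {}
            });
            ts[b].start();
            left = myRight; // this band's right semaphore is the next band's left
        }
        for (Thread t : ts) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(3, 2);
        System.out.println("row-segments filled: " + filled.get()); // 3 rows x 2 bands = 6
    }
}
```

Chaining each band's right semaphore to the next band's left is exactly what the \texttt{LCS} constructor does with \texttt{left=right} at the bottom of its loop.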
The calculation of the bounds for a band is performed by method
\texttt{startOfBand(i,nt,N)}
, which gives the starting column position of band number
\texttt{i}
, for
\texttt{nt}
threads and
\texttt{N}
columns. Band
\texttt{i}
,
\includegraphics{figures/ParLoops-17}
, extends from column
\texttt{startOfBand(i,nt,N)}
to
\texttt{startOfBand(i+1,nt,N)-1}
. The calculation is designed to guarantee that no band has more than one more column in it than another.
\par
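The band partitioning can be checked with a small driver. The class name \texttt{BandBounds} is invented here, but the formula is the one from \texttt{startOfBand}; for 10 columns and 3 bands it yields bands of 4, 3, and 3 columns.

```java
// Divides N columns (numbered 1..N) into nb bands whose sizes differ
// by at most one column, using the same formula as LCS.startOfBand.
public class BandBounds {
    static int startOfBand(int i, int nb, int N) {
        return 1 + i * (N / nb) + Math.min(i, N % nb);
    }

    public static void main(String[] args) {
        int N = 10, nb = 3;
        for (int i = 0; i < nb; i++) {
            // prints: band 0: columns 1..4, band 1: columns 5..7, band 2: columns 8..10
            System.out.println("band " + i + ": columns "
                + startOfBand(i, nb, N) + ".."
                + (startOfBand(i + 1, nb, N) - 1));
        }
    }
}
```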
Method
\texttt{getLength()}
will return the length of the LCS, and
\texttt{getArray()}
will return the array that the algorithm has filled in. Both of them must wait for the bands to finish computing before they can return.
\par

	%\begin{tabular}
	LCS class Band
	\begin{verbatim}
 private class Band extends Thread {
    int low; int high; Semaphore left, right;

    Band(int low, int high,
            Semaphore left, Semaphore right) {
        this.low=low; this.high=high;
        this.left=left; this.right=right;
    }

    public void run() {
        try {
            int i, j;
            for (i=1; i<a.length; i++) {
                left.down();
                for (j=low; j<=high; j++) {
                    if (c0[i-1]==c1[j-1])
                        a[i][j]=a[i-1][j-1]+1;
                    else
                        a[i][j]=Math.max(a[i-1][j], a[i][j-1]);
                }
                right.up();
            }
            done.signal();
        } catch (InterruptedException ex) {}
    }
 }
	\end{verbatim}
	%\end{tabular}
	
\section{Example: Shell Sort}
Shell sort works by sorting all sequences of elements
\emph{h}
positions apart with insertion sort, then those elements a smaller number of positions apart, and so on with diminishing increments, until finally sorting the overall array with insertion sort. Since insertion sort on an already nearly sorted array runs quickly, the shell sort can be much faster than the insertion sort alone.
\par

\texttt{ShellsortDC}
, presented in the section entitled ``
\ref{../B/ParSubs.xml#id(29055)}[MISSING HREF]
,'' uses powers of two as the increments. This is not the best method possible, since it is not until the last pass that elements in even-numbered positions are compared with elements in odd-numbered positions. If all the largest elements are in, say, the even-numbered positions, and the smallest are in the odd-numbered positions, the last pass will itself be of order $O(N^2)$, moving odd-numbered elements past increasingly long sequences of larger elements.
\par
The shell sort works best if the array is sorted in several stages. Each stage sorts a number of nonoverlapping subsequences, but the subsequences from different stages do overlap. For example, one stage could sort all five sequences of elements five spaces apart, and then the next could sort all elements three spaces apart. This avoids the problem of whole sets of elements not being compared until the last step.
\par
One wonders, though, whether a new pass with a smaller increment may undo the work of a previous pass. The answer is no. Suppose that the array is first
\emph{h}
sorted (i.e., that all elements
\emph{h}
positions apart are sorted) and then it is
\emph{k}
sorted. After the end of the
\emph{k}
sort, it will still be
\emph{h}
sorted. Rather than do a formal proof, let's consider an example. Suppose a high school girls' athletic team of 12 members is being posed for a photograph. The photographer poses them in two rows of six. She wants them arranged from the shortest girl in the first row on her left up to the tallest girl in the second row on her right. She wants a taller girl standing behind a shorter one in each column and the rows ordered by ascending height. The well-known method is to have the girls get into the two rows and then (1) ask each girl in the second row to change places with the girl in front of her if the girl in front is taller, and then (2) ask the girls in each row to repeatedly change places along the row while the girl on the right is taller than the girl on her left. After these two passes, the girls are in proper order.
\par
Why doesn't pass (2) spoil the order from pass (1)? Consider a girl, Pat, fourth place from the right (her right) on the second row at the end of this sorting. Is it possible for the girl in front of her to be taller than she is? Pat has three girls to her right who are shorter than she is. Each of those girls had a girl in front of her in the first row at the end of step (1) who was shorter than she, and hence shorter than Pat. Pat had a girl in front of her who was shorter. That means that there are at least four girls in the front row who are shorter than Pat, so after step (2), at least the rightmost four positions in the first row must be occupied by girls shorter than Pat, so the girl in front of Pat must be shorter.
\par
The same principle applies to the passes in the shell sort.
\par
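The claim can also be checked empirically. This sketch (the class name \texttt{SortedStaysSorted} is invented, and an ascending order is assumed) $h$-sorts a random array, then $k$-sorts it, and verifies that it is still $h$-sorted afterwards:

```java
import java.util.Random;

// Empirical check: an h-sorted array remains h-sorted after k-sorting.
public class SortedStaysSorted {
    // Insertion-sort (by swapping) each subsequence of elements h apart.
    static void hsort(int[] a, int h) {
        for (int j = h; j < a.length; j++)
            for (int i = j; i >= h && a[i] < a[i - h]; i -= h) {
                int t = a[i]; a[i] = a[i - h]; a[i - h] = t;
            }
    }

    static boolean isHSorted(int[] a, int h) {
        for (int i = h; i < a.length; i++)
            if (a[i] < a[i - h]) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] a = new Random(42).ints(1000, 0, 100).toArray();
        hsort(a, 5);                         // 5-sort first
        hsort(a, 3);                         // then 3-sort
        System.out.println("still 5-sorted: " + isHSorted(a, 5)); // true
    }
}
```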
It is usual to give the shell sort an ascending series of increments $h_i$ that will be the distances apart of the elements to be sorted. The shell sort can start with the largest $h_i$ smaller than the size of the array and go down the series, sorting with each. Of course, $h_0$ is 1.
\par
When you are sorting elements $h_i$ apart, there are $h_i$ series of those elements. The lowest-numbered element in series $j$ is element number $j$,
\includegraphics{figures/ParLoops-18}
. The chore graph for the shell sort is shown in
\ref{ParLoops.xml#id(88837)}[MISSING HREF]
.
\par

\includegraphics{figures/ParLoops-19}
So what are good step sizes for a shell sort? Let's consider several sequences for $h_0, h_1, h_2, \ldots, h_i, \ldots$:
\ref{#id(pgfId-8194)}[MISSING HREF]

\par

\begin{itemize}

\item{What about}

\includegraphics{figures/ParLoops-20}
? This is a bad idea. There will be
\emph{N}
passes, and each pass (at least from
\includegraphics{figures/ParLoops-21}
down) will need to look at all
\emph{N}
elements. That gives an order of $O(N^2)$ before considering how long the insertion sorts themselves take.
\end{itemize}

\begin{itemize}

\item{As previously mentioned, powers of two are not a good sequence, since the last pass must compare entire sequences of elements never compared before. The last pass alone is, in the worst case, of order $O(N^2)$.}

\item{The sequence 1, 3, 7, \ldots, $2^{i+1}-1$, \ldots\ will give an order of $O(N^{3/2})$. For a long time, the shell sort was thought to be of order $O(N^{3/2})$.}

\item{Knuth recommended the sequence 1, 4, 13, 40, 121, \ldots, where $h_{i+1}=3h_i+1$. It has been shown empirically to do reasonably well.}

\end{itemize}

\includegraphics{figures/ParLoops-22}
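Knuth's increments are easy to generate on the fly. In this sketch (the class name \texttt{KnuthGaps} is invented), the recurrence $h_{i+1}=3h_i+1$ is implied by the listed values 1, 4, 13, 40, 121; the increments are collected while they stay below the array length and used largest first.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Generates Knuth's increment sequence h_{i+1} = 3*h_i + 1,
// returning the increments below n with the largest first.
public class KnuthGaps {
    public static List<Integer> gaps(int n) {
        List<Integer> hs = new ArrayList<>();
        for (int h = 1; h < n; h = 3 * h + 1)
            hs.add(h);
        Collections.reverse(hs);  // sort passes run from the largest gap down
        return hs;
    }

    public static void main(String[] args) {
        System.out.println(gaps(200)); // [121, 40, 13, 4, 1]
    }
}
```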

\begin{itemize}

\item{Sedgewick found that the series 1, 8, 23, 77, ...,}

\includegraphics{figures/ParLoops-23}
gives the shell sort a running time of order $O(N^{4/3})$.
\end{itemize}

\begin{itemize}

\item{A series that gives an order of}

\includegraphics{figures/ParLoops-24}
behavior is obtained from the accompanying table, where the upper left corner is 1, each row is three times the preceding row, and each column is twice the column to its left. Choose the elements of the table smaller than
\emph{N}
, sort them, and go down the list. The reason for the $(\log N)^2$ is that there will be $\log_2 N$ columns and $\log_3 N$ rows with relevant entries, giving $O((\log N)^2)$ passes. Each sorting pass will require only $N$ comparisons, since an element will move at most one position. Why? Consider sorting an array that is already 2 sorted and 3 sorted. An element is already in order with all elements an even number of positions away. It is in order with elements three positions away and then, because of the 2 sorting, with elements 5, 7, 9, \ldots, away. Indeed, the only elements it may be out of position with are those one position away. This generalizes to elements
\emph{k}
positions apart. If the array is 2
\emph{k}
sorted and 3
\emph{k}
sorted, then it can be
\emph{k}
sorted with elements moving at most one step and hence requiring only
\emph{O}
(
\emph{N}
) comparisons.
\end{itemize}

	\begin{tabular}{ccccc}
	1 & 2 & 4 & 8 & \ldots \\
	3 & 6 & 12 & 24 & \ldots \\
	9 & 18 & 36 & 72 & \ldots \\
	27 & 54 & 108 & 216 & \ldots \\
	\ldots & \ldots & \ldots & \ldots & \ldots \\
	\end{tabular}
	The problem with this series is that it has too many passes, so the constant factor is too large for practical use.
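The entries of this table are exactly the numbers of the form $2^p 3^q$. A small sketch (the class name \texttt{SmoothGaps} is invented here) collects the entries below $N$ and orders them largest first:

```java
import java.util.TreeSet;

// Collects the increments 2^p * 3^q below n from the table above,
// returning them largest first, as a shell sort would use them.
public class SmoothGaps {
    public static Integer[] gaps(int n) {
        TreeSet<Integer> s = new TreeSet<>();
        for (long two = 1; two < n; two *= 2)       // walk the columns (powers of 2)
            for (long v = two; v < n; v *= 3)       // walk each row (times 3)
                s.add((int) v);
        return s.descendingSet().toArray(new Integer[0]);
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(gaps(30)));
        // [27, 24, 18, 16, 12, 9, 8, 6, 4, 3, 2, 1]
    }
}
```

Even for $N=30$ there are already twelve passes, which illustrates why the constant factor makes this series impractical.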
\begin{itemize}

\item{A geometric series with a ratio of 2.2 does well. You can start off with, say, h = N /5, to sort series of length 5 and then repeatedly decrease h to}

\includegraphics{figures/ParLoops-25}
for the rest of the series, being careful to use an increment of one the last time through. This is the series in general use.
\end{itemize}
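A single-threaded sketch of this schedule follows. The class name \texttt{ShellSortSeq} is invented, and an ascending sort order is assumed; the parallel version later in the chapter divides the $h$ subsequences among threads instead of looping over them in one thread.

```java
// Sequential shell sort using the geometric gap schedule with ratio 2.2.
public class ShellSortSeq {
    public static void sort(int[] a) {
        int h = a.length / 5;              // start by sorting runs of about 5
        if (h < 1) h = 1;
        while (h > 0) {
            if (h == 2) h = 1;             // never skip the final h=1 pass
            for (int m = 0; m < h; m++)
                // insertion-sort the subsequence m, m+h, m+2h, ...
                for (int j = m + h; j < a.length; j += h)
                    for (int i = j; i > m && a[i] < a[i - h]; i -= h) {
                        int tmp = a[i]; a[i] = a[i - h]; a[i - h] = tmp;
                    }
            h = (int)(h / 2.2);            // shrink the gap geometrically
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8, 0, 3, 9, 7, 6, 14, 11, 10, 13, 12};
        sort(a);
        System.out.println(java.util.Arrays.toString(a));
        // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
    }
}
```

The \texttt{h==2} adjustment matters: $\lfloor 2/2.2\rfloor = 0$, so without it the schedule would jump from 2 straight to 0 and skip the final insertion-sort pass.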

\subsection{ShellsortBarrier class}

\texttt{ShellsortBarrier.java}
in
\ref{ParLoops.xml#id(26881)}[MISSING HREF]
uses a fixed number of threads to sort an array. In early stages, there are more subsequences than threads, so the threads take several subsequences. In later stages, there may be more threads than subsequences, so some of the threads have no work to do.
\par

	%\begin{tabular}
	ShellsortBarrier overall structure.
	\begin{verbatim}
class ShellsortBarrier {
    static int minDivisible=3;
    int numThreads;

    public ShellsortBarrier(int numThreads) {
        this.numThreads=numThreads;
    }

    private class Sort implements Runnable {
        // see Example 6-10
    }

    static void isort(int[] a, int m, int h) {
        int i, j;
        for (j=m+h; j<a.length; j+=h) {
            for (i=j; i>m && a[i]>a[i-h]; i-=h) {
                int tmp=a[i];
                a[i]=a[i-h];
                a[i-h]=tmp;
            }
        }
    }

    public void sort(int[] a) {
        if (a.length<minDivisible) {
            isort(a,0,1);
            return;
        }
        SimpleBarrier b=new SimpleBarrier(numThreads);
        for (int i=numThreads-1; i>0; i--)
            new Thread(
                new Sort(a,i,a.length/minDivisible,b)).start();
        new Sort(a,0,a.length/minDivisible,b).run();
    }
}
	\end{verbatim}
	%\end{tabular}
	
\texttt{ShellsortBarrier}
has this interface to the user:
\par

\begin{itemize}

\item{ShellsortBarrier(int n) : This is the constructor; it creates a ShellsortBarrier object that will sort arrays of integers using n threads. It saves n in field numThreads .}

\item{sort(int[] a) : This method sorts the integer array a . It uses numThreads threads to do the sorting unless the array is too short to justify creating the threads.}

\end{itemize}
Method
\texttt{sort(a)}
first checks the length of
\texttt{a}
to see if it is short enough to sort directly. If so, the method calls the static method
\texttt{isort()}
that performs the actual sorts.
\par
If there are enough elements in the array to justify creating threads,
\texttt{numThreads}

\texttt{Sort}
objects are created. Of these,
\texttt{numThreads-1}
are run as separate threads, and one is executed by the thread that called
\texttt{sort()}
. Method
\texttt{sort()}
gives these threads a
\texttt{SimpleBarrier}
object to synchronize with.
\par
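This thread-creation pattern can be sketched with \texttt{java.util.concurrent.CyclicBarrier} standing in for the book's \texttt{SimpleBarrier} (\texttt{await()} plays the role of \texttt{gather()}). The class name \texttt{BarrierPattern} and the stage counts are invented; instead of sorting, each worker just counts its share of each stage.

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

// numThreads-1 workers run as separate threads; the calling thread
// runs the last worker itself, and all synchronize at the barrier
// between stages.
public class BarrierPattern {
    static final AtomicInteger work = new AtomicInteger();

    public static void run(int numThreads, int stages) throws InterruptedException {
        CyclicBarrier b = new CyclicBarrier(numThreads);
        Runnable worker = () -> {
            try {
                for (int s = 0; s < stages; s++) {
                    work.incrementAndGet();  // this thread's share of stage s
                    b.await();               // everyone finishes the stage together
                }
            } catch (Exception e) {}
        };
        Thread[] ts = new Thread[numThreads - 1];
        for (int i = 0; i < ts.length; i++)
            (ts[i] = new Thread(worker)).start();
        worker.run();                        // the calling thread does a share too
        for (Thread t : ts) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(4, 3);
        System.out.println("units of work done: " + work.get()); // 4 threads x 3 stages = 12
    }
}
```

Having the caller run one of the workers saves creating one thread, just as \texttt{sort()} does by calling \texttt{run()} on the last \texttt{Sort} object directly.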

\subsection{Class Sort}

\texttt{ShellsortBarrier}
's internal class
\texttt{Sort}
, shown in
\ref{ParLoops.xml#id(19469)}[MISSING HREF]
, handles the concurrent threads. Sort has the following constructor:
\par

\begin{verbatim}
Sort(int[] a, int i, int h, SimpleBarrier b)
\end{verbatim}

	%\begin{tabular}
	ShellsortBarrier class Sort
	\begin{verbatim}
 private class Sort implements Runnable {
    int[] a; int i, h; SimpleBarrier b;

    Sort(int[] a, int i, int h, SimpleBarrier b) {
        this.a=a; this.i=i; this.h=h; this.b=b;
    }

    public void run() {
        try {
            while (h>0) {
                if (h==2) h=1;
                for (int m=i; m<h; m+=numThreads) {
                    isort(a,m,h);
                }
                h=(int)(h/2.2);
                b.gather();
            }
        } catch (Exception ex) {}
    }
 }
	\end{verbatim}
	%\end{tabular}
	The parameters to the constructor are the following:
\par

\begin{itemize}

\item{a : This is the array to sort.}

\item{i : This is the number of this thread.}

\item{h : This is the initial increment between elements in a sequence being sorted.}

\item{b : This is the barrier with which to synchronize.}

\end{itemize}
If the increment between elements in a sequence is
\texttt{h}
, then there are
\texttt{h}
such sequences beginning at array indices 0, 1, 2,...,
\texttt{h}
-1. If there are
\texttt{n}
threads, thread number
\texttt{i}
handles the sequences
\texttt{i}
,
\texttt{i+n}
,
\texttt{i+2n}
, ....
\par
All the
\texttt{Sort}
threads sort their sequences of elements for the initial value of
\texttt{h}
. Then, they all set their copy of the value of
\texttt{h}
to
\includegraphics{figures/ParLoops-26}
and repeat until the array is entirely sorted. Since they are all doing the same updates on
\texttt{h}
, they all get the same values. There is one trick in the calculation of
\texttt{h}
. They detect that they are done when
\texttt{h}
takes on the value zero, which it will after sorting for an increment of one. They check to see whether they have gotten an increment of two, and if they have, they set it to one. If they didn't, then after dividing 2 by 2.2 and truncating, they would get
\texttt{h}
=0, skipping the last pass,
\texttt{h}
=1.
\par

\subsection{Performance}
The performance of the shell sort with barriers is shown in
\ref{ParLoops.xml#id(37379)}[MISSING HREF]
. Again, this was run on a dual-processor system, but unlike the implementations of Warshall's algorithm, the performance got better as more threads were used. In Warshall's algorithm, two threads, the same as the number of processors, was best.
\par

\includegraphics{figures/ParLoops-27}

\section{Chapter Wrap-up}
In this chapter, we explored how to execute loop iterations in parallel. Since, typically, most of a program's processing time is spent in a few, innermost loops, executing loops in parallel will often vastly speed up a program.
\par
Parallel loop execution is sometimes impossible; some loops must be executed sequentially. Sometimes, it is trivial, and every iteration is independent of the others, as are the two inner loops in Warshall's algorithm. Sometimes, it is in between, such as the iterations in the LCS algorithm, which need to be executed skewed.
\par
We presented an informal way of describing computational dependencies: chore graphs. Sketching chore graphs allows us to look for ways to organize the computation into parallel threads, while still maintaining dependencies.
\par
One of the most useful objects for coordinating loops is the barrier. Our
\texttt{SimpleBarrier}
class allows a number of threads to synchronize at the bottom of a sequential outer loop, as in Warshall's algorithm, or between stages of the computation, as in the shell sort.
\par

\section{Exercises}
1. How would trapezoidal integration (see the
\ref{../B/ParSubs.xml#id(52205)}[MISSING HREF]
) change if rephrased in terms of parallel loops?
\par
2. Could you use
\texttt{termination groups}
or
\texttt{accumulators}
instead of
\texttt{simple barriers}
?
\par
3. Redo the LCS algorithm using a
\texttt{BoundedBuffer}
object to pass values along rows between bands. This provides a model for distributed execution: The bands could be on different machines, and the values could be sent between the machines through sockets, as will be discussed later in Chapter 11 ``
\ref{../../gkt/E/networking.xml#id(32252)}[MISSING HREF]
.''
\par
4. A two-dimensional version of the Laplace equation works as follows: It updates the elements of a two-dimensional array (except for the border elements) by repeatedly assigning each element the average value of its orthogonal neighbors. (The orthogonal neighbors are the up, down, left, and right neighbors.) The border elements are set at constant values. The top, bottom, right, and left borders may have different values, but the elements along a border have the same value. The corners are irrelevant, since they aren't used to calculate any average. The process of updating elements continues until no element in the array changes by more than a small amount. (The ``small amount'' must be a large enough fraction of the magnitude of the border elements so that floating-point arithmetic can detect termination.)
\par
Different versions of the algorithm update the elements in different orders. One way updates the elements in two nested loops, processing them row-wise, left to right, and top to bottom. It updates an element using newer values for the up and the left neighbors than for the down and right neighbors, but that is okay. It makes it terminate more quickly.
\par
Implement this version using parallel threads similarly to the LCS algorithm. Divide the array into bands. Assign the bands to threads. Use semaphores to synchronize the threads. You will need two semaphores to communicate between adjacent bands: one to prevent the right thread from using the left neighbor along a row before it has been computed and the other to prevent the left thread from updating a right border element a second time before the right thread has used it. These semaphores should prevent threads from accessing the same array element at the same time.
\par
5. Another way to solve the Laplace equation (described in Exercise 4) is to imagine the array as a chessboard. Each element is either red or black, as are the squares on a chessboard. The algorithm alternately updates all the red squares and then all the black ones. All the red squares may be updated in parallel, and all the black squares may be updated in parallel.
\par
Implement this version.
\par
6. Write a method that, when given an array of URLs of Web pages and a string, will search the Web pages in parallel for an occurrence of the string and return an array of the URLs that contained the string.
\par


