\chapter{Thread and Chore Synchronization}

Chores have the same need for synchronization as
threads. We therefore provide versions of synchronization primitives that allow
both chores and threads to be synchronized. Here we will look at termination
groups, barriers, and accumulators to see how they interact with chores. We will
examine two parallel sorting algorithms as examples of their use. 

\section{TerminationGroup}

We discussed most of the methods of the
\texttt{TerminationGroup} interface in Section~\ref{parsubs.TerminationGroup}.
We deferred the discussion of its \texttt{runDelayed()} method until now.


Recall that we create an initial, single member of a termination group.
Then, by calling the \texttt{fork()} method of any member, we
create other members. Eventually, we call \texttt{terminate()} for each member.
After a member has terminated, no more members can be forked from it. When
all members of a termination group have terminated,
the termination group itself terminates.

\begin{lstlisting}[caption="TerminationGroup Methods"]

void awaitTermination()

TerminationGroup fork()

void runDelayed(Runnable r)

void terminate()

\end{lstlisting}

The \texttt{TerminationGroup} interface extends
\texttt{RunDelayed}, so chores can be run-delayed on a termination group.
Such chores are stored with the termination group until all members
of the termination group have terminated, whereupon they are placed in a
run queue.
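To make the life cycle concrete, here is a minimal, hypothetical sketch of these semantics built only on standard Java classes. \texttt{MiniTerminationGroup} is an invented name, not the library's implementation: all forked members share one counter, and when it reaches zero the delayed runnables are released (here they are simply run inline rather than placed on a run queue).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch of termination-group semantics (not the library class).
class MiniTerminationGroup {
   private final AtomicInteger members = new AtomicInteger(1); // initial member
   private final List<Runnable> delayed = new ArrayList<>();
   private boolean done = false;

   MiniTerminationGroup fork() {            // create another member
      members.incrementAndGet();
      return this;                          // sketch: members share state
   }

   synchronized void runDelayed(Runnable r) {
      if (done) r.run();                    // group already terminated
      else delayed.add(r);                  // hold until termination
   }

   void terminate() {                       // one member terminates
      if (members.decrementAndGet() == 0) release();
   }

   private synchronized void release() {    // all members done: run chores
      done = true;
      for (Runnable r : delayed) r.run();   // sketch: run inline, not queued
      delayed.clear();
   }
}
```

The delayed runnables fire only after every member, including forked ones, has called \texttt{terminate()}.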

We use a \texttt{SharedTerminationGroup} for
shared-memory systems. A \texttt{SharedTerminationGroup} contains a
\texttt{Future}; it assigns null to the future when termination has occurred.
Naturally, \texttt{SharedTerminationGroup} delegates \texttt{runDelayed()} calls
to the future.


The \texttt{SharedTerminationGroup} class has two
constructors: \texttt{SharedTerminationGroup(Future f)} and
\texttt{SharedTerminationGroup()}. You can either supply the future yourself, or you can
have the \texttt{SharedTerminationGroup} allocate one. If you supply one, you
can decide which run queue the chores will be placed in when the group
terminates. If you don't supply one, you will be choosing, by default, the run
queue associated with the \texttt{Future} class.

If you wish to create several shared termination
groups that will place chores in the same run queue, you can use a
\texttt{SharedTerminationGroupFactory}. Naturally, a
\texttt{SharedTerminationGroupFactory} generates \texttt{SharedTerminationGroup}
objects. Its constructor takes a \texttt{FutureFactory} object to generate the
futures it places in the termination groups.

\begin{lstlisting}[caption="SharedTerminationGroupFactory constructor and methods"]

SharedTerminationGroupFactory(FutureFactory futureFactory)

SharedTerminationGroup make()

FutureFactory getFutureFactory()

void setFutureFactory(FutureFactory futureFactory)

\end{lstlisting}

The constructor creates a \texttt{SharedTerminationGroupFactory} object with the
given \texttt{FutureFactory}. Method \texttt{make()} creates a
\texttt{SharedTerminationGroup} object. Methods \texttt{getFutureFactory()} and
\texttt{setFutureFactory()} get and set the \texttt{FutureFactory} object for a
\texttt{SharedTerminationGroupFactory} object.


\section{Barrier}

\texttt{Barrier} extends \texttt{SimpleBarrier} and implements
\texttt{RunDelayed}, so chores, as well as threads, can wait at barriers.
When a chore is run-delayed at a barrier, it counts as a member of the group
being synchronized: a call to \texttt{runDelayed()} counts as a \texttt{gather()}.


\begin{lstlisting}[caption="Barrier constructor and methods"]
Barrier(int n)
void gather()
void runDelayed(Runnable r)
void signal()
RunQueue getRunQueue()
void setRunQueue(RunQueue rq)
static RunQueue getClassRunQueue()
\end{lstlisting}



Being able to run-delay runnables, as well as
gather threads, makes barriers lightweight. If you do not have to pay the
cost of creating threads to execute a loop, you can more easily justify using
barriers and parallel loops within other parallel structures. For example, you could
have a reentrant server that uses a parallel loop. Normally, each call would
create a new set of threads, but with \texttt{runDelayed()}, you can implement it
so that the runnables share a run queue and do not create more threads than
the run queue allows.

Of the group of objects gathering at a barrier, all
can be threads calling \texttt{gather()}, all can be chores placed at the
barrier by calls to \texttt{runDelayed()}, or there can be some of each.

The difference between \texttt{Barrier} and
\texttt{SimpleBarrier} is the method \texttt{signal()}. You may signal a
barrier, which has the same effect on the barrier as a call to \texttt{gather()},
but the thread that calls \texttt{signal()} is not blocked.
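The distinction can be sketched with a small monitor class (an illustrative toy with assumed names, not the library's \texttt{Barrier}): \texttt{signal()} counts as an arrival but returns immediately, while \texttt{gather()} blocks until the whole group has arrived.

```java
// Toy barrier supporting both a blocking gather() and a
// non-blocking signal(); each trip starts a new cycle.
class MiniBarrier {
   private final int parties;   // arrivals needed per cycle
   private int count;           // arrivals still needed this cycle
   private long generation = 0; // which cycle we are in

   MiniBarrier(int n) { parties = n; count = n; }

   synchronized void signal() {          // counts as an arrival, never blocks
      arrive();
   }

   synchronized void gather() throws InterruptedException {
      long g = generation;
      if (arrive()) return;              // last arrival trips the barrier
      while (g == generation) wait();    // wait for this cycle to complete
   }

   private boolean arrive() {
      if (--count == 0) {                // last arrival: reset for next cycle
         count = parties;
         generation++;
         notifyAll();
         return true;
      }
      return false;
   }
}
```

A thread that gathers at a \texttt{MiniBarrier} of size two is released as soon as any other thread either gathers or signals.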

Why provide \texttt{signal()}? It is useful
in some loops. Consider the loop represented by the chore graph in
the figure below. Chores A, B, C, D, and E execute
in the loop. Chores A and B execute at the start of the loop; D must execute
after A; E, after B; and C, after both A and B. The edges leading upwards
represent dependencies between iterations. Before beginning the next iteration, A
must wait for C and D to be done with the previous iteration; B must wait for C
and E.

% Beginning of DIV

\includegraphics{figures/ChoreGroups-1}

% End of DIV

We can synchronize this loop with barriers, as shown
in the figure below. We group A and D into
one thread, B and E into another, and put C into a thread of its own. Although
AD and BE are not required to be synchronized with each other, we gather
them at a barrier at the top of the loop. C, however, presents a bit of a
problem. It must complete one iteration before AD and BE can continue with
the next, but C must also wait for chores A and B to complete each time around the
loop, so whatever synchronization object C waits on must be reset each
iteration.

% Beginning of DIV

\includegraphics{figures/ChoreGroups-2}

% End of DIV

One option would have AD, BE, and C gather at a
barrier at the top of the loop, after which C would down a semaphore twice; as A
and B complete, they would up the semaphore. This could cause C to block
three times in rapid succession without doing any useful work in between.

 The solution we show uses two barriers, both
requiring three threads. Both AD and BE gather at barrier B1. C signals B1,
without blocking there. C gathers at barrier B2. When chore A completes, before
starting chore D, thread AD signals B2. Similarly, BE signals B2 when chore B
completes. 

This design allows C to block only once. The
\texttt{signal()} operation allows barriers to be used for
synchronization within loop iterations without unnecessary blocking.
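The same scheme can be sketched with the JDK's \texttt{java.util.concurrent.Phaser}, where \texttt{arriveAndAwaitAdvance()} plays the role of \texttt{gather()} and \texttt{arrive()} plays the role of \texttt{signal()}. The class name and chore bodies below are stand-ins, not the book's code.

```java
import java.util.concurrent.Phaser;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the two-barrier loop: AD and BE gather at B1 (C signals it);
// C gathers at B2 (AD and BE signal it after A and B complete).
public class TwoBarrierLoop {
   static final int ITERS = 5;
   static final AtomicInteger cRuns = new AtomicInteger();

   public static void runLoop() throws InterruptedException {
      Phaser b1 = new Phaser(3);  // AD and BE gather here; C signals
      Phaser b2 = new Phaser(3);  // C gathers here; AD and BE signal

      Runnable ad = () -> {
         for (int i = 0; i < ITERS; i++) {
            b1.arriveAndAwaitAdvance(); // gather at B1
            /* chore A */
            b2.arrive();                // signal B2: A is done
            /* chore D */
         }
      };
      Runnable be = () -> {
         for (int i = 0; i < ITERS; i++) {
            b1.arriveAndAwaitAdvance(); // gather at B1
            /* chore B */
            b2.arrive();                // signal B2: B is done
            /* chore E */
         }
      };
      Runnable c = () -> {
         for (int i = 0; i < ITERS; i++) {
            b1.arrive();                // signal B1 without blocking
            b2.arriveAndAwaitAdvance(); // gather at B2: wait for A and B
            cRuns.incrementAndGet();    // chore C
         }
      };
      Thread tAD = new Thread(ad), tBE = new Thread(be), tC = new Thread(c);
      tAD.start(); tBE.start(); tC.start();
      tAD.join(); tBE.join(); tC.join();
   }
}
```

In each iteration, C blocks at most once, at B2, exactly as in the two-barrier design described above.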

\section{BarrierFactory}





If you want to synchronize runnables at barriers with
\texttt{runDelayed()}, you may want to use a \texttt{BarrierFactory} object to
create the barriers. The \texttt{BarrierFactory} object is given a run queue
when it is created. Each barrier it constructs will use that run queue for its
run-delayed objects. You can set the parameters of the run queue to limit the
number of threads that will run them.

\begin{lstlisting}[caption="BarrierFactory constructor and methods"]

public BarrierFactory(RunQueue rq)

public RunQueue getRunQueue()

public void setRunQueue(RunQueue rq)

public Barrier make(int n)

\end{lstlisting}

A \texttt{BarrierFactory} object allows you to get
and set its \texttt{RunQueue} object after it is created. Call \texttt{make(n)}
to get a new barrier that will gather \texttt{n} threads.

\section{AccumulatorFactory}





Accumulators have already been discussed in
Chapter 5. Accumulators, recall, assign a value to a future when enough signals
have accumulated. Class \texttt{Accumulator} implements \texttt{RunDelayed} by
simply delegating it to the future.
\begin{lstlisting}[caption="AccumulatorFactory constructor and methods"]

public AccumulatorFactory(FutureFactory futureFactory)

public FutureFactory getFutureFactory()

public void setFutureFactory(FutureFactory futureFactory)

public Accumulator make(int n)

public Accumulator make(int n, Object data)

\end{lstlisting}

An \texttt{AccumulatorFactory} object can be used to
allocate accumulators. It uses a \texttt{FutureFactory} object to create the
futures they contain. The \texttt{FutureFactory} object, in turn, allows you to
set the parameters on the run queues that the futures place their run-delayed
runnables on. You can get and set the future factory in an existing
accumulator factory with \texttt{getFutureFactory()} and \texttt{setFutureFactory()}.

You create an \texttt{Accumulator} object by
calling one of the \texttt{make()} methods. You must specify the number of signals
the accumulator is waiting for. You may also specify the initial data value the
accumulator contains; if you don't specify one, the value will
be null.
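A minimal sketch of these semantics, using a \texttt{CountDownLatch} to stand in for the future (\texttt{MiniAccumulator} is an invented name, not the library class):

```java
import java.util.concurrent.CountDownLatch;

// Toy accumulator: after n signals, the data value becomes available
// through the latch, which stands in for the future.
class MiniAccumulator {
   private final CountDownLatch latch;
   private final Object data;

   MiniAccumulator(int n, Object data) {
      this.latch = new CountDownLatch(n);
      this.data = data;
   }

   MiniAccumulator(int n) { this(n, null); }   // default data value is null

   void signal() { latch.countDown(); }        // one of the n signals

   Object getValue() throws InterruptedException {
      latch.await();                           // block until n signals seen
      return data;
   }
}
```

A caller blocked in \texttt{getValue()} is released, and sees the data value, only after the n-th \texttt{signal()}.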

\section{Parallel Quicksort}





 Quicksort is easy to run in parallel. Quicksort
works by picking an arbitrary element in an array to be the pivot element. The
algorithm partitions the array around the pivot, moving the elements so that all
elements to the left of the pivot element are less than or equal to it, and all
the elements to the right are greater than or equal to it. Then, each side of
the array can be sorted independently in the same manner. In the conventional
implementation, each side is sorted recursively. In a parallel implementation,
both sides can be sorted by parallel threads. 
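For comparison, the same divide-and-conquer structure can be sketched with the JDK's fork/join framework. This is only an illustration of the recursion pattern (with an assumed class name and a standard Lomuto partition), not the chore-based \texttt{ParQuickSort2} developed below.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Parallel quicksort sketch: partition, then sort both sides in parallel.
class FJQuickSort extends RecursiveAction {
   private final int[] a;
   private final int lo, hi;              // sort a[lo..hi)
   private static final int CUTOFF = 8;   // small ranges: sort sequentially

   FJQuickSort(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

   protected void compute() {
      if (hi - lo < CUTOFF) { Arrays.sort(a, lo, hi); return; }
      int p = partition();
      invokeAll(new FJQuickSort(a, lo, p),      // both sides in parallel
                new FJQuickSort(a, p + 1, hi));
   }

   private int partition() {              // Lomuto partition around a[hi-1]
      int pivot = a[hi - 1], i = lo;
      for (int j = lo; j < hi - 1; j++)
         if (a[j] <= pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
      int t = a[i]; a[i] = a[hi - 1]; a[hi - 1] = t;
      return i;
   }
}
```

Usage: \texttt{new ForkJoinPool().invoke(new FJQuickSort(ary, 0, ary.length))}. The fork/join pool plays the role that the run queue and termination group play in \texttt{ParQuickSort2}.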

\begin{lstlisting}[caption="Overall structure of ParQuickSort2"]
public class ParQuickSort2 {
  int numThreads;
  int minDivisible=8;

    class QuickSortThread2 implements Runnable{
      int ary[],m,n; TerminationGroup tg; RunQueue rq;
      public QuickSortThread2(int ary[],int mm, int nn,
                              TerminationGroup t, RunQueue rq) {
        this.ary=ary; m=mm; n=nn; tg=t; this.rq=rq;
      }

      public void run() {
        quicksort(m,n);
        tg.terminate();
      }

      void quicksort(int m, int n) {...}
      
    }
    
    
    public ParQuickSort2(int numThreads) {
      this.numThreads=numThreads;
    }
      
    public void sort(int[] ary) {...}
}
\end{lstlisting}




\begin{lstlisting}[caption="Method sort() of ParQuicksort2"]
 public void sort(int[] ary) {
    int N=ary.length;
    TerminationGroup terminationGroup;
    RunQueue rq=new RunQueue();
    FutureFactory ff=new FutureFactory(rq);
    TerminationGroupFactory tgf=new SharedTerminationGroupFactory(ff);
    Runnable subsort;
    rq.setMaxThreadsCreated(numThreads);
    terminationGroup=tgf.make();
    subsort=new QuickSortThread2(ary,0,N,terminationGroup,rq);
    rq.run(subsort);
    try {
      terminationGroup.awaitTermination();
    } catch(InterruptedException e){}
    rq.setMaxThreadsWaiting(0);
 }
\end{lstlisting}



\begin{lstlisting}[caption="Method quicksort() of QuickSortThread2"]
   void quicksort(int m, int n) {
      int i,j,pivot,tmp;
      if (n-m<minDivisible) {          // small subarray: insertion sort
         for (j=m+1;j<n;j++) {
            for (i=j;i>m && ary[i]<ary[i-1];i--) {
               tmp=ary[i];
               ary[i]=ary[i-1];
               ary[i-1]=tmp;
            }
         }
         return;
      }
      i=m;
      j=n;
      pivot=ary[i];
      while (i<j) {                    // partition around the pivot
         j--;
         while (pivot<ary[j]) j--;
         if (j<=i) break;
         tmp=ary[i];
         ary[i]=ary[j];
         ary[j]=tmp;
         i++;
         while (pivot>ary[i]) i++;
         tmp=ary[i];
         ary[i]=ary[j];
         ary[j]=tmp;
      }
      Runnable subsort;
      if (i-m > n-i) {                 // run the larger side as a new chore
         subsort=new QuickSortThread2(ary,m,i,tg.fork(),rq);
         rq.run(subsort);
         quicksort(i+1,n);
      } else {
         subsort=new QuickSortThread2(ary,i+1,n,tg.fork(),rq);
         rq.run(subsort);
         quicksort(m,i);
      }
   }
\end{lstlisting}



How fast can a parallel implementation of quicksort
run? A sequential quicksort runs on average in O(\emph{N} log
\emph{N}) time, where \emph{N} is the size of the array.
The parallel version can never be better
than O(\emph{N}) time: the first partition of the array must look at every
element and therefore takes O(\emph{N}) time by itself. (If you could partition
the array in parallel, you could make it run faster.)

Our implementation of quicksort,
\texttt{ParQuickSort2}, uses chores for
subsorts. These chores partition the array and then create other chores for the
recursive calls. The chores use a \texttt{TerminationGroup} object to report
when the sort is done.

The overall structure of \texttt{ParQuickSort2} is
shown in the listing above. An instance of
\texttt{ParQuickSort2} is created with the maximum number of threads that may be
used in a sort. The method \texttt{sort()} is called on this object to sort an
array. The internal class \texttt{QuickSortThread2} contains the chores that actually
perform the sorts.

\subsection{Method sort()}



Method \texttt{sort()}, shown in
the listing above, begins with a number of
statements just to set up the sort. It has to create a run queue for the sorting
chores to be placed on, and it needs a termination group for the chores to use to
report when they are done. This is a bit of overkill, since we don't really need to
create the termination group factory or the future factory. The actual sorting
is done by the following two lines:

\begin{verbatim}
subsort=new QuickSortThread2(ary,0,N,terminationGroup,rq);
rq.run(subsort);
\end{verbatim}

The \texttt{QuickSortThread2} object is told the array to
sort, the bounds within which to sort it, the termination group to signal
when it is done, and the run queue to run subsorts on.

We run the subsort by placing it in the run queue
\texttt{rq}. (We could have run it ourselves by calling \texttt{subsort.run()}.)
We then wait for the sort to finish by calling
\texttt{terminationGroup.awaitTermination()}, which must be placed in a try
statement, since it could, in principle, throw an \texttt{InterruptedException}.

\subsection{QuickSortThread2}



As can be seen from
the listing above, the real work in
\texttt{QuickSortThread2} is done in method \texttt{quicksort()}.
\texttt{QuickSortThread2}'s constructor places its parameters in fields of
the object. The \texttt{run()} method calls \texttt{quicksort()}, signals
termination, and returns.

\subsection{Method quicksort()}



Method \texttt{quicksort()} has three major parts.


\begin{enumerate}

\item{If the portion of the array is small enough, it sorts it with insertion
sort, which is faster for small arrays than quicksort.}

\item{If the portion of the array isn't small enough for insertion sort,
quicksort partitions it. It selects the first element of the subarray as the
pivot. It moves \texttt{j} downward from the top of the subarray past elements
larger than the pivot, and \texttt{i} upward from the bottom past elements smaller. The
pivot element is bounced back and forth between \texttt{ary[i]} and \texttt{ary[j]}.}

\item{After partitioning, it creates a runnable to sort one side of the array,
and it recursively sorts the other side. It chooses to sort the smaller side
itself. This is an attempt to control the grain size by creating fewer
runnables, but with larger subarrays to sort.}

\end{enumerate}

\section{Shell Sort}





We discussed the shell sort in Chapters 5 and 6.
Remember that the shell sort is an improvement on insertion sort: insertion sort runs
much faster if the array is already in nearly sorted order. Shell sort uses a
series of increasing increments $h_i$. It works down the series from larger
increments to smaller ones, sorting the elements in the array that are $h_i$
apart, then those $h_{i-1}$ apart, and so on. Increment $h_0$ is one,
so the last pass is a straight insertion sort. The idea is that sorting with the
larger increments first quickly moves the elements close to their ultimate
positions and limits how far they will have to move in subsequent passes.
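The sequential algorithm can be sketched as follows. The class name is invented, and the gap handling (shrinking by roughly 2.2 and skipping from 2 directly to 1) mirrors what \texttt{ShellSort6} below does.

```java
// Sequential shell sort: each pass is an insertion sort with gap h.
class SeqShellSort {
   static void sort(int[] a) {
      int h = a.length / 2;                        // assumed starting gap
      while (h > 0) {
         if (h == 2) h = 1;                        // skip 2: 2/2.2 truncates to 0
         for (int j = h; j < a.length; j++) {      // gapped insertion sort
            for (int i = j; i >= h && a[i] < a[i - h]; i -= h) {
               int t = a[i]; a[i] = a[i - h]; a[i - h] = t;
            }
         }
         h = (int) (h / 2.2);                      // next, smaller gap
      }
   }
}
```

The final pass always has gap one, a plain insertion sort over an almost-sorted array, which is what makes the whole scheme correct.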

\begin{lstlisting}[caption="Shellsort6"]

package info.jhpc.textbook.chapter08;

import info.jhpc.thread.*;

public class ShellSort6 {
   int numThreads = 8;

   int minDivisible = 16;

   class SortPass implements Runnable {
      int ary[], i, k, n;

      Accumulator finish;

      SortPass(int ary[], int i, int k, int n, RunDelayed start,
            Accumulator finish) {
         this.ary = ary;
         this.i = i;
         this.k = k;
         this.n = n;
         this.finish = finish;
         start.runDelayed(this);
      }

      public void run() {
         isort(ary, i, k, n);
         finish.signal();
      }
   }

   class IMerge implements Runnable {
      int ary[], i, k, m, n;

      Accumulator finish;

      IMerge(int ary[], int i, int k, int m, int n, RunDelayed start,
            Accumulator finish) {
         this.ary = ary;
         this.i = i;
         this.k = k;
         this.m = m;
         this.n = n;
         this.finish = finish;
         start.runDelayed(this);
      }

      public void run() {
         imerge(ary, i, k, m, n);
         finish.signal();
      }
   }

   int numInSequence(int i, int k, int n) {
      return (n - i + k - 1) / k;
   }

   int midpoint(int i, int k, int n) {
      return i + numInSequence(i, k, n) / 2 * k;
   }

   void setupSequence(int ary[], int i, int k, int n, RunDelayed start,
         Accumulator finish, AccumulatorFactory af) {
      if (numInSequence(i, k, n) <= minDivisible)
         new SortPass(ary, i, k, n, start, finish);
      else {
         Accumulator a = af.make(2);
         int m = midpoint(i, k, n);
         setupSequence(ary, i, k, m, start, a, af);
         setupSequence(ary, m, k, n, start, a, af);
         new IMerge(ary, i, k, m, n, a, finish);
      }
   }

   Accumulator setupPass(int ary[], RunDelayed start, int k,
         AccumulatorFactory af) {
      Accumulator finish = af.make(k);
      for (int i = 0; i < k; i++) {
         setupSequence(ary, i, k, ary.length, start, finish, af);
      }
      return finish;
   }

   public ShellSort6(int numThreads) {
      this.numThreads = numThreads;
   }

   public void sort(int a[]) {
      int N = a.length;
      if (N < minDivisible) {
         isort(a, 0, 1, N);
         return;
      }
      RunQueue rq = new RunQueue();
      rq.setMaxThreadsCreated(numThreads);
      FutureFactory ff = new FutureFactory(rq);
      AccumulatorFactory af = new AccumulatorFactory(ff);
      Accumulator waitFor = af.make(1);
      waitFor.signal();
      int k, m;
      k = N / 5;
      waitFor = setupPass(a, waitFor, k, af);
      k = N / 7;
      waitFor = setupPass(a, waitFor, k, af);
      for (k = (int) (k / 2.2); k > 0; k = (int) (k / 2.2)) {
         if (k == 2)
            k = 1;
         waitFor = setupPass(a, waitFor, k, af);
      }
      try {
         waitFor.getFuture().getValue();
      } catch (InterruptedException ie) {
      }
      ff.getRunQueue().setMaxThreadsWaiting(0);
   }

   void isort(int[] a, int m, int k, int n) {
      int i, j;
      for (j = m + k; j < n; j += k) {
         for (i = j; i >= m + k && a[i] < a[i - k]; i -= k) {
            int tmp = a[i];
            a[i] = a[i - k];
            a[i - k] = tmp;
         }
      }
   }

   void imerge(int[] a, int m, int k, int mid, int n) {
      int i, j;
      for (j = mid; j < n; j += k) {
         if (a[j] >= a[j - k])
            return;
         for (i = j; i >= m + k && a[i] < a[i - k]; i -= k) {
            int tmp = a[i];
            a[i] = a[i - k];
            a[i - k] = tmp;
         }
      }
   }

   public static class Test1 {
      public static void main(String[] args) {
         int[] a = new int[25];
         int i;
         for (i = a.length - 1; i >= 0; i--) {
            a[i] = (int) (Math.random() * 100);
         }
         for (i = 0; i < a.length - 1; i++) {
            System.out.print(" " + a[i]);
         }
         System.out.println();
         ShellSort6 s = new ShellSort6(3);
         s.sort(a);
         for (i = 0; i < a.length - 1; i++) {
            System.out.print(" " + a[i]);
         }
         System.out.println();
      }
   }

   public static class TestTime1 {
      public static void main(String[] args) {
         if (args.length < 2) {
            System.out.println("Usage: java ShellSort6$TestTime1 n nt");
            System.exit(0);
         }
         int N = Integer.parseInt(args[0]);
         int T = Integer.parseInt(args[1]);
         int[] a = new int[N];
         int i;
         long time;
         for (i = a.length - 1; i >= 0; i--) {
            a[i] = (int) (Math.random() * N);
         }
         // for (i=a.length-1;i>=0;i--) {
         // System.out.print(" "+a[i]);
         // }
         // System.out.println();
         ShellSort6 s = new ShellSort6(T);
         time = System.currentTimeMillis();
         s.sort(a);
         time = System.currentTimeMillis() - time;
         // for (i=a.length-1;i>=0;i--) {
         // System.out.print(" "+a[i]);
         // }
         // System.out.println();
         System.out.println("ShellSort6\t" + N + "\t" + T + "\t" + time);
      }
   }

}

\end{lstlisting}



When sorting with increment $h_i$, there are
$h_i$ sequences of elements: the sequences whose lowest positions are 0, 1,
\ldots, $h_i-1$. When $h_i$ is large, the parallelism is good. There are
plenty of sequences, and they have a small enough grain size; that is, they are
not so long as to keep threads waiting for one or two long computations to
complete.

As the increment gets smaller, however, the number of
sequences decreases and their length increases. This can do bad things to the
amount of parallelism available. We want more parallelism late in the sort.
Here's how we get it:

 We can break down a long sequence of elements into
two halves and sort them separately in parallel. Then, we merge the two
sequences into one sorted sequence. How? 

 We merge the two halves by a small variant of
insertion sort itself. We take the bottom element of the upper half and move it
down to its proper place in the lower half; then, we take the next element from
the upper half and do the same. We continue until the next element from the
upper half doesn't move. 

 If the array wasn't almost sorted, there
wouldn't be any advantage to this method, but since the array is in
close-to-correct order already, the merge will not have to move many elements
and will not have to move them very far. 

 Naturally, we can apply this recursively. A long
sequence can be broken into two parts to be sorted separately and then merged,
each of those two parts can be broken into two parts, and so on. Consider the
effect on the last pass with an increment of one. Suppose that the array is
large and that you have four processors. Since the array is large, it can be
broken into a large number of parts. These parts can be sorted in parallel.
Then, there will be merges that can be done in parallel, and after that, more
merges. Instead of having to wait for a single processor to examine each element
in the array, the other processors can be busy working on parts of it until very
near the end. Finally, the number of merges will fall beneath the number of
processors, but the amount of work they have left to do will be small. 

The overall structure and some of the detail of this
parallel implementation, \texttt{ShellSort6}, is shown in
the listing above. It contains two runnable inner
classes, \texttt{SortPass} and \texttt{IMerge}, to handle sorting and
merging sequences in parallel.

The purposes of methods \texttt{numInSequence()} and \texttt{midpoint()}
are evident from their names. Method \texttt{numInSequence()} computes the number of
elements in a sequence from \texttt{i} up to, but not including, \texttt{n},
with an increment of \texttt{k}. Method \texttt{midpoint()} gives the index of
an element near the center of that sequence.
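A quick worked check of the two formulas, with arbitrarily chosen values: for \texttt{i=1}, \texttt{k=3}, \texttt{n=10}, the sequence occupies positions 1, 4, and 7, so \texttt{numInSequence} is 3 and \texttt{midpoint} is position 4.

```java
// The two index helpers from ShellSort6, reproduced for a worked check.
class SeqHelpers {
   static int numInSequence(int i, int k, int n) {
      return (n - i + k - 1) / k;                 // ceiling of (n - i) / k
   }
   static int midpoint(int i, int k, int n) {
      return i + numInSequence(i, k, n) / 2 * k;  // index near the center
   }
}
```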

Method \texttt{setupPass()} starts a parallel pass
over the array. For step size \texttt{k}, it arranges to sort the
\texttt{k} subsequences in parallel. It uses method \texttt{setupSequence()} to
arrange to sort each sequence recursively. Method \texttt{setupSequence()} breaks
the sequence into fragments to sort and builds a tree to merge them.

The real work of the algorithm is done by methods
\texttt{isort()} and \texttt{imerge()}, shown in
the listing above. The parameters to both have the
same names: they work on array \texttt{a} from position \texttt{m} up to, but
not including, position \texttt{n}, with an increment \texttt{k} between
elements. In \texttt{imerge()}, parameter \texttt{mid} is the position of the
lowest element in the upper half of the sequence. Method \texttt{isort()} is an
insertion sort, and \texttt{imerge()} is based on insertion sort, so they should
be clear in themselves.



Class \texttt{SortPass}, shown in
the listing above, is an encapsulation of method
\texttt{isort()}. Its job is to call \texttt{isort()} and then signal an
accumulator when it is done. It is created with the parameters it is to pass to
\texttt{isort()} and with two extra parameters it uses for scheduling,
\texttt{start} and \texttt{finish}. Parameter \texttt{start} indicates a
condition the \texttt{SortPass} object is to wait for before running. Parameter
\texttt{finish} is the accumulator it must signal when it is done. The
constructor of \texttt{SortPass} itself calls \texttt{start.runDelayed(this)} to
schedule the object to run when \texttt{start} indicates it should. What will the
start condition be? \texttt{SortPass} objects are given accumulators as their
start objects. When they execute \texttt{runDelayed()} on the accumulator, they
are delayed until the accumulator has received the correct number of
signals. These accumulators indicate that the previous pass of the sorting
algorithm is complete.

\texttt{IMerge} is to \texttt{imerge()} what
\texttt{SortPass} is to \texttt{isort()}. It waits until a start condition
is met, then runs, and finally signals its completion. The start condition it
waits for by calling \texttt{runDelayed()} is again an accumulator; this
accumulator indicates that sorting of the two subsequences is complete and
they may now be merged.

Method \texttt{setupPass()}, shown in
the listing above, sets up one pass of the shell
sort. By a pass, we mean a sort of all $h_i$ subsequences with increment
$h_i$. The parameter \texttt{k} indicates the increment to be used.
Parameter \texttt{start} indicates what the chores in the pass must wait on
before starting; it will be an accumulator that counts the completion of
chores in the previous pass and indicates when they are done. Method
\texttt{setupPass()} creates an accumulator of its own to count the completion
of its subsorts and returns it to the caller.

Method \texttt{setupSequence()}, shown in
the listing above, sets up the sort of one
subsequence of the array. If the subsequence is short enough, it simply creates
a \texttt{SortPass} object to handle it. Otherwise, it breaks the sequence
into two parts, calls itself recursively to set up chore graphs to sort both
sides, and creates an \texttt{IMerge} object to merge them. It creates an
accumulator with a count of two, which the two recursive calls use to indicate
when the sides are done. It passes this accumulator to the \texttt{IMerge}
object as its start condition, so the \texttt{IMerge} object will be scheduled
when both sides are sorted.

Method \texttt{sort()}, shown in
the listing above, in a \texttt{ShellSort6} object
sorts an array. Method \texttt{sort()} starts by creating a run queue with a
restriction on how many threads it may create, a future factory using the run
queue, and an accumulator factory using the future factory.

Method \texttt{sort()} uses the variable
\texttt{waitFor} to hold the accumulators that signal the completion of passes.
Initially, it creates an accumulator and triggers it itself. It passes this to
\texttt{setupPass()} to allow the first pass to begin immediately. Each call to
\texttt{setupPass()} returns the next value for \texttt{waitFor}, the
accumulator that will signal the completion of that pass.

The first two passes use sequences of approximately
five and seven elements, respectively (increments of \texttt{N/5} and \texttt{N/7}).
These values were chosen at the whim of the
implementer, without any research supporting them as a good decision. Each subsequent
pass uses an increment approximately 1/2.2 as large as the previous one. The last
increment is one, of course. The loop skips the increment of two and goes
directly to one, because 2/2.2 would truncate to zero, skipping an increment
of one.


Finally, \texttt{sort()} simply waits for the accumulator in \texttt{waitFor}
to be triggered, indicating that the computation is over.

Note that \texttt{sort()}, \texttt{setupPass()},
and \texttt{setupSequence()} do not do any of the actual computation themselves.
They set up the chore graph, and the chores do the computation. Moreover, the
chore graph starts executing before it is completely set up.

The performance of \texttt{ShellSort6} is shown in
the chart below. As with the other charts, the
results are for a Java system with kernel threads on a dual-processor system. In
this case, two threads, the same as the number of processors, performed best.

% Beginning of DIV

\includegraphics{figures/ChoreGroups-3}

% End of DIV

A weakness of \texttt{ShellSort6} is the
large number of \texttt{SortPass} and \texttt{IMerge} objects it creates. Just
as we have been reusing threads through run queues, we can reuse these
\texttt{SortPass} and \texttt{IMerge} objects. We implemented
\texttt{ShellSort6F}, a version of \texttt{ShellSort6} that uses factories to
create \texttt{SortPass} and \texttt{IMerge} objects. These factories provide
recycling. When a \texttt{SortPass} or \texttt{IMerge} object comes to the end
of its \texttt{run()} method, it passes itself to the \texttt{recycle()} method
of its factory. The factory saves the object on a stack. When called to allocate
an object, the factory reuses an object from the stack, or, if the stack is
empty, allocates a new object. Recycling is much like explicit deallocation
of storage in C or C++. You might worry that a recycled object could still be in
use, just as explicitly freed storage may be, introducing obscure bugs when the
same storage is reused as two different objects. That is not a problem in
this case, since \texttt{SortPass} and \texttt{IMerge} objects are abandoned
once they come to the end of their \texttt{run()} methods.
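The recycling idea can be sketched generically (the class name is assumed; the book's factories are specific to \texttt{SortPass} and \texttt{IMerge}):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Generic sketch of a recycling factory: finished objects are pushed on a
// stack and handed out again, avoiding repeated allocation.
class RecyclingFactory<T> {
   private final Deque<T> free = new ArrayDeque<>();
   private final Supplier<T> create;

   RecyclingFactory(Supplier<T> create) { this.create = create; }

   synchronized T make() {                 // reuse if possible, else allocate
      return free.isEmpty() ? create.get() : free.pop();
   }

   synchronized void recycle(T obj) {      // caller promises obj is abandoned
      free.push(obj);
   }
}
```

As in the text, correctness depends on the caller's promise: an object must only be recycled once nothing else can still be using it.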

The performance of \texttt{ShellSort6F} is shown in
the chart below. Again, two threads performed
best. We also compared the best series for the shell sort using a barrier
(Chapter 6), \texttt{ShellSort6}, and
\texttt{ShellSort6F}. \texttt{ShellsortBarrier} is comparable to
\texttt{ShellSort6}, although at larger array sizes \texttt{ShellSort6}
appears to gain the advantage. \texttt{ShellSort6F}'s speed is about
five times that of \texttt{ShellSort6}; that is, it runs in about one-fifth the
time. It's clear that recycling pays.









% Beginning of DIV

\includegraphics{figures/ChoreGroups-4}

% End of DIV

% Beginning of DIV

\includegraphics{figures/ChoreGroups-5}

% End of DIV

\section{Chapter Wrap-up}



In this chapter, we continued to explore programming
with chores. We looked at more synchronization classes in the thread
package that implement \texttt{RunDelayed}: barriers, accumulators, and shared
termination groups. All have corresponding factories that can create
synchronization objects that use specified run queues for their delayed runnables.



\par
