\chapter{Parallel Execution of Subroutines in Shared Memory}
Parallelism lets us solve problems more quickly than a
single-processor machine can, much as a group of people can solve
problems larger than any one person could alone. But just as with
groups of people, there are additional costs and problems involved in
coordinating parallel processors:

\begin{itemize}
\item We need to have more than one processor work on the problem at
  the same time. Our machine must have more than one processor, and the
  operating system must be able to give more than one processor to our
  program at the same time. Kernel threads allow this in Java. An
  alternative approach is to have several networked computers work on
  parts of the problem; this is discussed in Chapters 11 and 12,
  ``Networking'' and ``Coordination.'' %TODO refs for chapters 11 and 12?

\item We need to assign parts of the problem to threads. This at least
  requires rewriting a sequential program. It usually requires
  rethinking the algorithm as well.

\item We need to coordinate the threads so they perform their
  operations in the proper order, as well as avoid race conditions and
  deadlocks. A number of useful facilities are not provided by the
  standard Java language package. We provide a good collection for your
  use in our thread package.

\item We need to maintain a reasonable grain size. Grain size refers
  to the amount of work a thread does between communications or
  synchronizations. Fine grain means very few instructions between
  synchronizations; coarse grain means a large amount of work between
  them. Too fine a grain wastes too much overhead creating and
  synchronizing threads. Too coarse a grain results in load imbalance
  and the underutilization of processors.
\end{itemize}

Two easy, practical approaches to dividing the work among several
processors are executing subroutines in parallel and executing
iterations of loops in parallel. Parallelizing loops will be presented
in the next chapter. In this chapter we will discuss running
subroutines in parallel.

Executing subroutines in parallel is an easy way to speed up
computation. The chunks of code are already packaged for you in
methods; you merely need to wrap runnable classes around them. Of
course, there are certain requirements:

\begin{itemize}
\item The subroutines must be able to run in parallel with some other
  computation. This usually means that there are several subroutine
  calls that can run independently.

\item The subroutines must have a reasonable grain size. It costs a
  lot to get a thread running, and it doesn't pay off for only a few
  instructions.
\end{itemize}


Two kinds of algorithms particularly adaptable to parallel execution
of subroutines are the divide-and-conquer and branch-and-bound
algorithms. Divide-and-conquer algorithms break large problems into
parts and solve the parts independently. Parts that are small enough
are solved simply as special cases. You must know how to break a large
problem into parts that can be solved independently and whose
solutions can be reassembled into a solution of the overall
problem. The algorithm may incur some cost in breaking the problem
into subparts or in assembling the solutions.

Branch-and-bound is a search technique that we will look at later in
this chapter.


\section{Creating and Joining}
The obvious way to run a subroutine in parallel is to create a thread
to run it, start the thread, and later wait for it to terminate via
\lstinline'join()'.
 

If the subroutine doesn't return a value, \lstinline'join()' alone
is adequate; but if it is to return a value, there is the question,
``How should a thread return a result?'' One easy option is to assign
it to a public field of the thread object. After the call to
\lstinline'join()', the caller simply extracts the result from the
subthread object.
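
A minimal sketch of the pattern (the \lstinline'SquareThread' example
class here is hypothetical, not part of our thread package):

\begin{verbatim}
// Hypothetical example: compute n*n in a subthread and return it
// through a public field read after join().
class SquareThread extends Thread {
    private final int n;
    public int result;                 // the caller reads this after join()

    SquareThread( int n ) { this.n = n; }

    public void run() { result = n * n; }
}

class JoinDemo {
    static int squareInParallel( int n ) {
        SquareThread t = new SquareThread( n );
        t.start();                     // run the subroutine in its own thread
        try { t.join(); }              // wait for it to terminate
        catch ( InterruptedException ie ) { }
        return t.result;               // join() guarantees the field is visible
    }
}
\end{verbatim}

The call to \lstinline'join()' both waits for the subthread and
establishes the memory visibility needed to read the field safely.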


%TODO the labelling of this as a subsubsection may be incorrect
% we may want to have principles, etc underneath 'Example'

\subsubsection{Example: Trapezoidal Numeric Integration}
Sometimes, a program needs to integrate a function (i.e., calculate
the area under a curve). It might be able to use a formula for the
integral, but doing so isn't always convenient, or even possible. An
easy alternative approach is to approximate the curve with straight
line segments and calculate an estimate of the area from them.


\subsection{Principles}

\ref{ParSubs.xml#id(46574)}[MISSING HREF] shows the situation. We wish
to find the area under the curve from $a$ to $b$. We approximate the
function by dividing the domain from $a$ to $b$ into $g$ equally sized
segments, each $(b-a)/g$ long. Let the boundaries of these segments be
$x_0=a$, $x_1$, $x_2$, \ldots, $x_g=b$. The polyline approximating the
curve will have coordinates $(x_0,f(x_0))$, $(x_1,f(x_1))$,
$(x_2,f(x_2))$, \ldots, $(x_g,f(x_g))$.


\includegraphics{figures/ParSubs-1}

This allows us to approximate the area under the curve as the sum of
$g$ trapezoids. The $i$th trapezoid ($i=1,\ldots,g$) has corners
$(x_{i-1},0)$, $(x_{i-1},f(x_{i-1}))$, $(x_i,f(x_i))$, and
$(x_i,0)$. The area of each trapezoid is given by the formula
\includegraphics{figures/ParSubs-2} .


The area under the curve is approximated by the sum of all the trapezoids:


\includegraphics{figures/ParSubs-3}

If we apply that formula unthinkingly, we will evaluate the function
twice for each value of $x$ except the first and the last
values. A little manipulation gives us


\includegraphics{figures/ParSubs-4}
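
Written out (a rendering consistent with the derivation above; the
book's own typeset formulas appear in the accompanying figures), the
result is
\[
\int_a^b f(x)\,dx \;\approx\; \frac{b-a}{g}\left[\frac{f(x_0)+f(x_g)}{2}
  \;+\; \sum_{i=1}^{g-1} f(x_i)\right],
\qquad x_i = a + i\,\frac{b-a}{g},
\]
so each interior point is evaluated only once, and the two endpoints
are each counted with weight one half.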

\subsection{Interface}
We allow the integrand to be passed in as a parameter. Since Java
doesn't have function closures, we use an object that implements the
\lstinline'F_of_x' interface shown in
\ref{ParSubs.xml#id(37191)}[MISSING HREF].



%\begin{tabular}
F\_of\_x interface
\begin{verbatim}
public interface F_of_x {
  public double f( double x );
}
\end{verbatim}
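
An implementing class simply supplies the body of \lstinline'f()'. For
example, for the integrand $f(x)=x^2$ (an illustrative class, not part
of the package):

\begin{verbatim}
// The interface is repeated here so the fragment stands alone.
interface F_of_x {
    double f( double x );
}

// A minimal integrand: f(x) = x squared.
class Square implements F_of_x {
    public double f( double x ) { return x * x; }
}
\end{verbatim}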

To use the class, create an instance of \lstinline'IntegTrap1', shown in
\ref{ParSubs.xml#id(43799)}[MISSING HREF], passing its constructor
\lstinline'numThreads', the number of threads you wish to use, and
\lstinline'granularity', the granularity for each thread. Each thread
will be given an equal part of the region over which to integrate the
curve. This granularity is the number of trapezoids each thread is to
use in its region. The total number of trapezoids is, therefore,
\lstinline'numThreads' $\times$ \lstinline'granularity'. The reason we
refer to the number of trapezoids as granularity is that it determines
the computational granularity: the number of computations a thread
performs before synchronizing with the calling thread is a linear
function of the number of trapezoids it calculates.



Interface of class IntegTrap1
\begin{verbatim}
public IntegTrap1( int numThreads, int granularity )
public double integrate( double a, double b, F_of_x fn )
\end{verbatim}

To perform the integration of a function from a to b, call the method
\lstinline'integrate()' of the \lstinline'IntegTrap1' object. Any number of
integrations can be performed, one at a time or concurrently, using
the same \lstinline'IntegTrap1' object to specify the number of threads
and the granularity. Each of these integrations will create the number
of threads specified.


\subsection{Code}
The actual work of integration is done in the class
\lstinline'IntegTrap1Region', an extension of \lstinline'Thread', shown in
\ref{ParSubs.xml#id(24397)}[MISSING HREF]. An instance is created
with a region to integrate over (\lstinline'x_start' to
\lstinline'x_end'), a \lstinline'granularity', and a function \lstinline'f'
to integrate. When the \lstinline'IntegTrap1Region' thread runs, it
calculates the area it is responsible for using the formula previously
derived and places the result in field \lstinline'areaOfRegion'. The
value of \lstinline'areaOfRegion' can be fetched using the method
\lstinline'getArea()'.


Class IntegTrap1Region
\begin{verbatim}
class IntegTrap1Region extends Thread {
    private double x_start, x_end;
    private int granularity;
    private double areaOfRegion = 0;
    private F_of_x f;

    public IntegTrap1Region( double x_start, double x_end,
                             int granularity, F_of_x f ) {
        super( x_start + "-" + x_end );
        this.x_start = x_start;
        this.x_end = x_end;
        this.granularity = granularity;
        this.f = f;
    }

    public void run() {
        double area = 0.0d;
        double range = x_end - x_start;
        double g = granularity;
        for ( int i = granularity - 1; i > 0; i-- ) {
            area += f.f( (i / g) * range + x_start );
        }
        area += ( f.f( x_start ) + f.f( x_end ) ) / 2.0;
        area = area * ( range / g );
        areaOfRegion = area;
    }

    public double getArea() {
        return areaOfRegion;
    }
}
\end{verbatim}

The \lstinline'integrate()' method is shown in
\ref{ParSubs.xml#id(14859)}[MISSING HREF]. Essentially, it calls
\lstinline'numThreads' subroutines concurrently to integrate the function
over parts of the region. It creates an array of \lstinline'numThreads'
\lstinline'IntegTrap1Region' threads and starts them. Then it loops,
waiting for them to terminate and adding their areas to the
total. When all subthreads have been processed, it returns the area.


The method integrate()
\begin{verbatim}
includesnip: info.jhpc.textbook.chapter05.integration.threaded.IntegTrap1:one
\end{verbatim}
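
Since the listing above is included from the example archive, here is
a hedged, self-contained sketch of the same structure: create the
region threads, start them, join each in turn, and sum their
areas. The names \lstinline'IntegSketch' and \lstinline'IntegRegion'
are stand-ins for this sketch, not the book's actual classes:

\begin{verbatim}
interface F_of_x {
    double f( double x );
}

class IntegRegion extends Thread {
    private final double x_start, x_end;
    private final int granularity;
    private final F_of_x f;
    private double areaOfRegion = 0;

    IntegRegion( double x_start, double x_end, int granularity, F_of_x f ) {
        this.x_start = x_start;
        this.x_end = x_end;
        this.granularity = granularity;
        this.f = f;
    }

    public void run() {
        // trapezoidal sum over this thread's region, as derived earlier
        double range = x_end - x_start, g = granularity, area = 0.0;
        for ( int i = granularity - 1; i > 0; i-- )
            area += f.f( (i / g) * range + x_start );
        area += ( f.f( x_start ) + f.f( x_end ) ) / 2.0;
        areaOfRegion = area * ( range / g );
    }

    double getArea() { return areaOfRegion; }
}

class IntegSketch {
    static double integrate( double a, double b, F_of_x fn,
                             int numThreads, int granularity ) {
        IntegRegion[] regions = new IntegRegion[ numThreads ];
        double step = ( b - a ) / numThreads;
        for ( int i = 0; i < numThreads; i++ ) {
            regions[i] = new IntegRegion( a + i * step, a + (i + 1) * step,
                                          granularity, fn );
            regions[i].start();               // run all regions concurrently
        }
        double area = 0.0;
        for ( IntegRegion r : regions ) {     // join each and add its area
            try { r.join(); } catch ( InterruptedException ie ) { }
            area += r.getArea();
        }
        return area;
    }
}
\end{verbatim}

Joining the threads in array order is harmless: the total time is
bounded by the slowest region regardless of the order in which the
caller waits.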

	
\subsection{Discussion}
A number of changes are possible that might be improvements, including the following:


\begin{enumerate}

\item{The integrate() method could create subthreads for all but one
of the regions and then calculate the area of the final region
itself. This saves one thread creation, and thread creation is
expensive. The code doing the actual integration would be removed from
the run() method of IntegTrap1Region and packaged in a separate static
method.}

\item{As an alternative to using join() and looking in a field for the
result, the subthread can return its result in a SimpleFuture object
(see Chapter 3, ``Futures'').}

\end{enumerate}
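
The second alternative can be sketched as follows, with a minimal
stand-in for the thread package's SimpleFuture (the
\lstinline'setValue()'/\lstinline'getValue()' API is assumed from
Chapter 3; the demo class is hypothetical):

\begin{verbatim}
// A minimal stand-in for the thread package's SimpleFuture.
class SimpleFutureSketch {
    private Object value;
    private boolean isSet = false;

    synchronized void setValue( Object v ) {
        value = v; isSet = true; notifyAll();
    }

    synchronized Object getValue() throws InterruptedException {
        while ( !isSet ) wait();
        return value;
    }
}

class FutureDemo {
    static double squareLater( double x ) {
        SimpleFutureSketch f = new SimpleFutureSketch();
        // the subthread returns its result through the future
        new Thread( () -> f.setValue( x * x ) ).start();
        try { return (Double) f.getValue(); }   // caller blocks until set
        catch ( InterruptedException ie ) { return Double.NaN; }
    }
}
\end{verbatim}

With a future, the caller needn't keep a reference to the subthread at
all; it waits on the result object instead of on the thread.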

\section{RunQueue}
There is a problem with creating threads and waiting for them to
terminate: It is possible to create a lot of threads. Threads are
expensive to create and may pose problems for some garbage collectors.


An alternative provided by our thread package is the class
\lstinline'RunQueue' . The \lstinline'RunQueue' class allows you to queue up
\lstinline'Runnable' objects for execution. It creates its own threads to
run them. These threads loop, removing the \lstinline'Runnable' objects
and calling their \lstinline'run()' methods, so the threads are reused,
rather than having to be created and garbage collected for each
\lstinline'Runnable' object.
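
The reuse idea can be illustrated with a stripped-down analogue: one
worker thread looping over a plain queue of \lstinline'Runnable'
objects. This is an illustration only, not the \lstinline'RunQueue'
implementation described later (which manages multiple threads, with
limits and timeouts):

\begin{verbatim}
import java.util.ArrayDeque;
import java.util.Queue;

// Illustration only: a single reusable worker thread looping over a
// queue of Runnables.
class MiniRunQueue {
    private final Queue<Runnable> runnables = new ArrayDeque<Runnable>();
    private boolean done = false;

    private final Thread worker = new Thread( () -> {
        while ( true ) {
            Runnable r;
            synchronized ( this ) {
                while ( runnables.isEmpty() && !done ) {
                    try { wait(); }
                    catch ( InterruptedException ie ) { return; }
                }
                if ( runnables.isEmpty() ) return;  // terminated and drained
                r = runnables.remove();
            }
            r.run();    // the same thread runs every Runnable
        }
    } );

    MiniRunQueue() { worker.start(); }

    synchronized void put( Runnable r ) { runnables.add( r ); notify(); }

    void terminate() {  // wake the worker, let it drain, then wait for it
        synchronized ( this ) { done = true; notifyAll(); }
        try { worker.join(); }
        catch ( InterruptedException ie ) { }
    }
}
\end{verbatim}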


A \lstinline'RunQueue' object may be created with a parameterless
constructor. Alternatively, you can provide the maximum number of
threads that can be in existence at one time and, optionally, the
maximum number of threads that will be allowed to wait for more
runnables to be enqueued. Normally, the \lstinline'RunQueue' object will
allow as many threads to be created as necessary to run all the
runnables enqueued. That is the only safe default, since restricting
the number can result in deadlock if you aren't careful. However,
there are circumstances where a smaller limit may be best. They occur
particularly when all the runnables are guaranteed to run to
completion without blocking. We will discuss this programming style
later, particularly in the chapters on chores. (See Chapter 7,
``Chores'' \ref{../D/Chores.xml#id(96750)}[MISSING HREF], and Chapter
8, ``Thread and Chore Synchronization''
\ref{../D/ChoreGroups.xml#id(96750)}[MISSING HREF].)

	
\begin{verbatim}
RunQueue constructors
    RunQueue()
    RunQueue(maxCreatable)
\end{verbatim}

	
\subsubsection{RunQueue Methods}

\subsection{run(runnable)}
The primary method provided by \lstinline'RunQueue' enqueues a
\lstinline'Runnable' for execution. There are three names for this
method, depending on how the programmer thinks of \lstinline'RunQueue'.
Method \lstinline'rq.run(r)' says to run the \lstinline'Runnable'
\lstinline'r'. Method \lstinline'rq.put(r)' acknowledges that
\lstinline'rq' is a queue and puts \lstinline'r' into the queue. Method
\lstinline'rq.runDelayed(r)' is provided for convenience. Its use won't
be obvious until we discuss chores.

	
\begin{verbatim}
RunQueue methods
    public void run(runnable)
    public void put(runnable)
    public void runDelayed(runnable)
\end{verbatim}

	
\subsection{Managing the Created Threads}
The rest of \lstinline'RunQueue' 's methods are related to managing the
threads it creates. These threads are implemented in the internal
class \lstinline'Xeq' , so we will call them \lstinline'Xeq' threads. There
are five controlling attributes of a \lstinline'RunQueue' object:


\begin{verbatim}
Remaining RunQueue methods
    public void setMaxThreadsWaiting(int n)
    public void setMaxThreadsCreated(int n)
    public int getMaxThreadsWaiting()
    public int getMaxThreadsCreated()
    public int getNumThreadsWaiting()
    public int getNumThreadsCreated()
    public void terminate()
    public void setWaitTime(long n)
    public long getWaitTime()
    public void setPriority(int n)
    public int getPriority()
    public void setDaemon(boolean d)
    public boolean getDaemon()
\end{verbatim}

	
\begin{enumerate}

\item{maxThreadsCreated: The maximum number of Xeq threads that may
be in existence at any one time. By default, this will be the maximum
positive integer. You can set it to some other value by calling
setMaxThreadsCreated().}

\item{maxThreadsWaiting: The maximum number of threads that may wait
for more runnables to be enqueued. By default, this also will be the
maximum positive integer. You can set it to some other value by
calling setMaxThreadsWaiting().}

\item{priority: The priority of the threads that are created. If you
want it to be something other than normal priority, you can call
setPriority().}

\item{waitTime: The number of milliseconds a thread will wait for a
new runnable before terminating.}

\item{makeDaemon: If true, Xeq threads will be created as daemon
threads; if false, they will not.}

\end{enumerate}

\subsection{Termination}
The most important of these from your point of view are
\lstinline'setMaxThreadsWaiting()' and \lstinline'terminate()' . You will
need to use one of them to get the \lstinline'Xeq' threads to
terminate. If you do not eliminate the threads as soon as you are
done, they will stay in existence and waste system resources.


To eliminate the threads when you are done with the object
\lstinline'rq', call \lstinline'rq.setMaxThreadsWaiting(0)'. Any
waiting \lstinline'Xeq' threads are awakened and, seeing that the
number of threads now allowed to wait is zero, they terminate. You
could set the maximum number of threads allowed to wait to zero before
you are done with the run queue, but if the run queue is allowed to
create the maximum number of threads, a new thread will be created
whenever a runnable is enqueued, and that's no advantage over just
creating and starting threads yourself.


You can alternatively call \lstinline'terminate()' , which will both set
the allowed number of waiting threads to zero and set a flag to
disable the run queue. If more runnables are enqueued, they will not
be run.


A third possibility is to call \lstinline'rq.setWaitTime(t)' to force
waiting threads to terminate after \lstinline't' milliseconds. If you set
this time too low, the run queue may not allow threads to stay around
long enough to be reused when more runnables are enqueued
later. However, when there will be no more runnables enqueued, it is
safe to set the wait time to zero. As with \lstinline'maxThreadsWaiting',
if this value is changed, all waiting threads are awakened, and if
they find no runnables to execute, they will try waiting again with
the new wait time. If a timeout occurs and a thread wakes up to find
no runnable waiting, it will terminate.


\subsection{Adjusting maxThreadsCreated}
The field \lstinline'maxThreadsCreated' limits the number of threads that
may be in existence at any one time (not the overall
number). Normally, a runnable placed in the run queue will be given a
thread to execute it immediately with no limits on the number of
threads created.


If these runnables can themselves make parallel subroutine calls
placing their runnables in the queue, then it is essential that there
be no limits on the number created. If the limit is, say, four, the
system could easily deadlock with four callers holding on to their
threads waiting for their subroutines to terminate, while the
subroutines can't run at all, because they are in the run queue
waiting for threads to become available to run them.


So why might you ever wish to set the number lower? If you can do it
safely, you may want to throttle your parallelism:


\begin{itemize}

\item{Threads are expensive to create. They take up a lot of
storage. They take a fraction of processor time while they are
runnable.}

\item{You have only a limited number of processors available. Beyond
the number of available processors, more runnable threads will not
give you more parallelism.}

\end{itemize}

Thus, you might prefer a run queue that does not create more than a certain number of threads.


The problem, of course, is that to be safe, the runnables you place in
the queue must not block the threads executing them to wait for
events that can be caused by other runnables in the queue. We call
runnables that do not block their threads \emph{chores}. We will
discuss chores and how to synchronize them in a later chapter.


\subsection{Adjusting maxThreadsWaiting}
The purpose of a run queue is to allow \lstinline'Xeq' threads to be
reused for running more than one runnable. There are two ways this can
happen:


\begin{enumerate}

\item{An Xeq thread can find another runnable waiting when it
completes processing one. You would expect this to be common only when
the maximum number of threads is limited, since, otherwise, a thread
is created for each runnable enqueued, and the thread that completes
another runnable must slip in after the runnable is enqueued, but
before the created thread dequeues it.}

\item{An Xeq thread can already be waiting for another runnable to be
enqueued. It will be awakened when the runnable is put in the queue.}

\end{enumerate}
Why might you want to limit the number? You have already seen one
reason: You want to get rid of the threads in a run queue that you are
no longer using. You can set the maximum number of waiting threads to
zero after you are done with the run queue.


A good reason to set the limit before you are done with the run queue
is to save space. Suppose you generally have no more than 5 runnables
active at any one time, but once you may have up to 20. You could set
\lstinline'maxThreadsWaiting' to 5 to handle the normal case and just pay
the thread-creation and garbage-collection cost for the other 15. That
way, you don't have 20 threads taking up space for long periods.


\subsection{Adjusting waitTime}
You might prefer to adjust \lstinline'waitTime' rather than
\lstinline'maxThreadsWaiting' . The parameter \lstinline'waitTime'
determines how many milliseconds a thread will wait for a runnable to
be enqueued before terminating on its own. It has the advantage that
it will automatically free storage for unneeded threads. However, to
use it well, instead of having to know the normal maximum number of
runnables in the system, you need to know something about the
interarrival time distribution.


\subsubsection{RunQueue Implementation}
Implementing \lstinline'RunQueue' requires solving some tricky problems
in thread synchronization, as we will see in this section.


\subsection{Fields}

\lstinline'RunQueue' has the following fields:


	Fields of \lstinline'RunQueue'
\begin{verbatim}
protected QueueComponent runnables = new QueueComponent();
protected volatile int numThreadsWaiting = 0;
protected volatile int numNotifies = 0;
protected volatile int maxThreadsWaiting = Integer.MAX_VALUE;
protected volatile int numThreadsCreated = 0;
protected volatile boolean goOn = true;
protected volatile int maxThreadsCreated = Integer.MAX_VALUE;
protected volatile int priority = Thread.NORM_PRIORITY;
protected volatile long waitTime = ...;
protected volatile boolean makeDaemon = true;
\end{verbatim}
	
\begin{itemize}

\item{runnables is the actual queue that the runnables are placed in.}

\item{numThreadsWaiting is the number of Xeq threads waiting for
runnables to execute.}

\item{numNotifies is the number of waiting threads awakened by
notify() operations, but not started running yet. As described later,
this is used by a waking thread to recognize whether it has been
awakened by a notify() or by a time out.}

\item{maxThreadsWaiting is the maximum number of threads that are
allowed to wait for runnables at any one time.}

\item{numThreadsCreated is the number of Xeq threads that are
currently in existence.}

\item{goOn is set to false by method terminate() to shut down the run
queue.}

\item{maxThreadsCreated is the maximum number of threads that are
allowed to be created at any one time.}

\item{priority is the thread priority at which the Xeq threads will
run.}

\item{waitTime is the number of milliseconds an Xeq thread will wait
to be awakened before timing out and terminating.}

\item{makeDaemon determines whether or not a created thread will be a
daemon. Its default value is true.}

\end{itemize}

\subsection{Xeq threads}
The states of an \lstinline'Xeq' thread are shown in Figure 5-2. Its
code is shown in Example 5-5. An \lstinline'Xeq' thread just loops,
getting runnables from the \lstinline'runnables' queue and running
them. Flag \lstinline'goOn' tells it whether it should continue executing
or whether the \lstinline'RunQueue' has been terminated.


\includegraphics{figures/ParSubs-5}

	Code for class Xeq
\begin{verbatim}
protected class Xeq extends Thread {
    public void run() {
        Runnable r;
        try {
            while (goOn) {
                r = dequeue();
                r.run();
            }
        } catch (InterruptedException ie) { // nothing
        } catch (Exception e) {
            e.printStackTrace();
        }
        numThreadsCreated--;
    }
}
\end{verbatim}

	There are two kinds of exceptions that \lstinline'Xeq' threads
handle. An \lstinline'InterruptedException' is used to indicate that the
thread timed out and should terminate. Any other kind of exception
causes the thread to terminate with a stack trace, the same as
uncaught exceptions terminating normal threads.


\subsection{Enqueueing Runnables}
The code to put runnables into the run queue is shown in
\ref{ParSubs.xml#id(11106)}[MISSING HREF] .


	Code to put runnables into a run queue
\begin{verbatim}
public void put(Runnable runnable) {
    boolean createThread = false;
    synchronized (this) {
        runnables.put(runnable);
        if (numThreadsWaiting > 0) {
            numThreadsWaiting--; numNotifies++;
            notify();
        } else if (numThreadsCreated < maxThreadsCreated) {
            numThreadsCreated++;
            createThread = true;
        }
    }
    if (createThread) {
        Thread t = new Xeq();
        t.setPriority(priority);
        t.setDaemon(makeDaemon);
        t.start();
    }
}
\end{verbatim}

	The \lstinline'put()' method is divided into two parts. The first
puts the runnable into the \lstinline'runnables' queue and decides
whether to wake up a waiting thread, create a new thread, or do
nothing. The second part actually creates the new thread. The second
part isn't included in the first, because thread creation is slow, and
we do not wish to leave the run queue locked during the creation of
the thread.


To wake up a waiting thread, the \lstinline'put()' method decrements the
number of waiting threads and increments the number of threads being
awakened ( \lstinline'numNotifies' ). The reasons for this are as
follows:


Why does the thread putting the runnable into the queue decrement the
number of threads waiting, rather than letting the waiting thread
decrement the number when it wakes up? The reason is that the notified
thread does not get to lock the run queue immediately. Other threads
can get in before it runs. Suppose there is one thread waiting and 10
threads enqueue runnables before the waiting thread runs. If the
enqueueing threads didn't decrement the count of threads waiting, it
would appear to each of the 10 that there was a waiting thread to run
its runnable, so all would try to awaken it, but only the one thread
would actually wake up. That would leave nine runnables without
threads to execute them.


Why is it necessary to keep track of the number of threads being
awakened (\lstinline'numNotifies')? It is used in method
\lstinline'dequeue()', described later, to decide whether a thread was
awakened explicitly or whether it timed out. When the \lstinline'Xeq'
thread awakens, if \lstinline'numNotifies' is nonzero, it assumes that it
was awakened on purpose. If \lstinline'numNotifies' is zero, it assumes
it timed out.


\subsection{Dequeueing Runnables}
The code that \lstinline'Xeq' threads use to dequeue runnables is shown
in \ref{ParSubs.xml#id(28357)}[MISSING HREF]. When there are
runnables in the queue, it will remove one and return it. If the queue
is empty, it must either wait or terminate. It will wait if the number
of threads already waiting is less than the maximum number of threads
permitted to wait. Otherwise, it is not allowed to wait, so it
terminates by throwing an \lstinline'InterruptedException', which the
\lstinline'run()' method in the \lstinline'Xeq' class interprets as a
request for termination.


	Dequeueing runnables by Xeq threads
\begin{verbatim}
protected synchronized Runnable dequeue()
        throws InterruptedException {
    Runnable runnable;
    while (runnables.isEmpty()) {
        if (numThreadsWaiting < maxThreadsWaiting) {
            numThreadsWaiting++;
            wait(waitTime);
            if (numNotifies == 0) {
                numThreadsWaiting--;
                throw new InterruptedException();
            } else { numNotifies--; }
        } else { // terminate
            throw new InterruptedException();
        }
    }
    runnable = (Runnable) runnables.get();
    return runnable;
}
\end{verbatim}

If the \lstinline'dequeue()' method waits, it specifies \lstinline'waitTime'
as a timeout period. When it falls out of the \lstinline'wait()' , it
doesn't know whether it timed out or was awakened by a \lstinline'notify'. 
In \ref{ParSubs.xml#id(96796)}[MISSING HREF] , the two states ``be
awakened'' and ``time out'' are circled to indicate that they execute
the same code.
 

How does the thread figure out which state it's in? It checks
\lstinline'numNotifies' . When a thread putting a runnable into the queue
wishes to wake up a waiting \lstinline'Xeq' thread, it increments
\lstinline'numNotifies' . The \lstinline'Xeq' thread sees that
\lstinline'numNotifies' is greater than zero, decides that it has been
awakened by a \lstinline'notify' rather than timing out, and then
decrements \lstinline'numNotifies' . If a thread wakes up and finds that
\lstinline'numNotifies' is zero, it will assume that it timed out so it
will decrement \lstinline'numThreadsWaiting' --since it is no longer
waiting and no one else decremented it--and will terminate by throwing
an \lstinline'InterruptedException' .


To determine whether this will always work, we consider some possible anomalies:


\begin{enumerate}

\item{Suppose one Xeq thread is waiting, times out, and before it
runs, a runnable is enqueued. The thread enqueueing the runnable will
see that numThreadsWaiting is greater than zero, decrement it,
increment numNotifies to one, and issue a notify() , which will have
no effect. When the Xeq thread falls out of the call to wait() , it
will decide that it has been awakened, which is what we need.}

\item{Suppose two Xeq threads are waiting, and that one times out, but
before it can seize the run queue, the other thread is awakened by a
notify() . The timed-out thread may run first and, thinking that it
was awakened, go off to run the runnable that was enqueued. The
notified thread may run second, think that it timed out, and
terminate. The overall effect is correct, even if the actual threads
did the opposite of what they theoretically should have.}

\end{enumerate}

Note that the test \lstinline'runnables.isEmpty()' is in a loop, since
before a notified thread can dequeue a runnable, another \lstinline'Xeq'
thread could call \lstinline'dequeue()' and grab the runnable for itself.


\subsection{setMaxThreadsWaiting()}
Changing the number of threads allowed to wait requires waking up
threads in excess of the new number so that they can terminate. This
is complicated by the common code for the ``be awakened'' and ``time
out'' states. What we do is treat all the waiting threads as being
awakened and let each decide whether to wait again; a thread that
would exceed the maximum number now allowed terminates instead.


	The RunQueue method setMaxThreadsWaiting()
\begin{verbatim}
public synchronized void setMaxThreadsWaiting(int n) {
    maxThreadsWaiting = n;
    numNotifies += numThreadsWaiting;
    numThreadsWaiting = 0;
    notifyAll();
}
\end{verbatim}

	
\subsection{makeDaemon}
The \lstinline'makeDaemon' field determines whether the created
\lstinline'Xeq' threads will be daemon threads or user threads. By
default, they will be daemons. Why? If there are \lstinline'Xeq' threads
still in existence when the rest of the user program terminates, we
don't want these threads to keep the program running.


\section{Recursive Shell Sort: RunQueues and SimpleFutures}
The Shell sort, named after its inventor Donald Shell, is an
improvement on the insertion sort, which, as you recall, divides the
array to be sorted into two parts: the part that has not yet been
sorted and the part that has. Initially, none have been sorted. The
algorithm works by taking one element at a time from the portion that
has not been sorted and inserting it into its proper place in the
portion that is sorted.


The insertion sort runs faster if the array is already almost ordered
when the algorithm starts, since each element being inserted won't
need to be moved far. Shell's innovation was to sort subsequences of
the array (e.g., elements \emph{h} positions apart) to put the array
into a nearly sorted order before performing the final sort.
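
The gap-sorting idea can be sketched sequentially as follows (an
illustrative version with a simple halving gap sequence; the
\lstinline'ShellSketch' class is hypothetical, not the book's
divide-and-conquer class described next):

\begin{verbatim}
// Illustrative sequential Shell sort: h-sort the array for a
// shrinking sequence of gaps, finishing with an ordinary insertion
// sort at gap 1.
class ShellSketch {
    static void sort( int[] a ) {
        for ( int h = a.length / 2; h >= 1; h /= 2 ) {
            // insertion-sort each subsequence of elements h apart
            for ( int j = h; j < a.length; j++ ) {
                int tmp = a[j], i = j;
                while ( i >= h && a[i-h] > tmp ) {
                    a[i] = a[i-h];
                    i -= h;
                }
                a[i] = tmp;
            }
        }
    }
}
\end{verbatim}

By the time the gap reaches 1, earlier passes have moved elements near
their final positions, so the final insertion sort does little work.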


Using the divide-and-conquer approach, class \lstinline'ShellsortDC'
recursively divides the array into two interspersed subarrays, sorts
them, and then sorts the overall array. The two interspersed subarrays
are the even-numbered elements and the odd-numbered elements. If the
subarray has few enough elements, it is sorted by the insertion
sort. If it is longer, it is itself sorted recursively by a
\lstinline'ShellsortDC' object.


\subsection{ShellsortDC}
The code for \lstinline'ShellsortDC', except for the contained class
that actually does the sorting, is shown below.


	%\begin{tabular}
	Divide-and-conquer Shell sort
\begin{verbatim}
class ShellsortDC {
  static int minDivisible=3;

  private static class Sort implements Runnable {
    .....
  }

  static int numInSequence(int i, int k, int n){
    return (n-i+k-1)/k;
  }

  static void isort(int[] a,int m,int k) {
    int i,j;
    for (j=m+k; j<a.length; j+=k) {
      for (i=j; i>m && a[i]<a[i-k]; i-=k) {
        int tmp=a[i];
        a[i]=a[i-k];
        a[i-k]=tmp;
      }
    }
  }

  public static void sort(int[] a) {
    SimpleFuture f=new SimpleFuture();
    RunQueue rq=new RunQueue();
    rq.run(new Sort(a,0,1,f,rq));
    try{
      f.getValue();
    }catch(Exception ex){}
    rq.setMaxThreadsWaiting(0);
  }
}
\end{verbatim}

Method \lstinline'numInSequence(i,k,n)' calculates the number of
elements in a subsequence starting at position \lstinline'i' , stepping
by \lstinline'k' , and not extending up to or beyond \lstinline'n' . It is
used to decide whether to sort a sequence recursively with
\lstinline'ShellsortDC' or simply with an insertion sort. The field
\lstinline'minDivisible' is the size limit beneath which insertion sort
is used.
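A quick worked check of the formula (the method body is copied from the listing above):

```java
public class NumInSequenceDemo {
    // Copied from ShellsortDC: counts positions i, i+k, i+2k, ...
    // that lie below n, i.e., the ceiling of (n-i)/k.
    static int numInSequence(int i, int k, int n) {
        return (n - i + k - 1) / k;
    }

    public static void main(String[] args) {
        // The odd positions below 10 are 1, 3, 5, 7, 9: five of them.
        System.out.println(numInSequence(1, 2, 10));  // prints 5
        // The even positions below 10 are 0, 2, 4, 6, 8: also five.
        System.out.println(numInSequence(0, 2, 10));  // prints 5
    }
}
```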


An insertion sort is performed by method \lstinline'isort()' . Method
\lstinline'isort(a,m,k)' sorts the subsequence of array \lstinline'a'
starting at \lstinline'm' and stepping by \lstinline'k' . It is not
optimized. More efficient implementations would remove the element to
be inserted, use a binary search to decide where the element goes,
move blocks of elements over to make space, and drop it in.
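As a sketch of that optimization, here is the special case \lstinline'm'~=~0, \lstinline'k'~=~1 (a hypothetical variant of ours, not taken from the thread package): remove the element, binary-search for its place, move the block over with \lstinline'System.arraycopy()', and drop it in.

```java
public class IsortOpt {
    // Optimized insertion sort for the contiguous case (m=0, k=1).
    static void isortOpt(int[] a) {
        for (int j = 1; j < a.length; j++) {
            int x = a[j];                    // remove the element to insert
            int lo = 0, hi = j;
            while (lo < hi) {                // binary search for its slot
                int mid = (lo + hi) >>> 1;
                if (a[mid] <= x) lo = mid + 1; else hi = mid;
            }
            System.arraycopy(a, lo, a, lo + 1, j - lo); // move block over
            a[lo] = x;                       // drop it in
        }
    }
}
```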


Method \lstinline'sort(a)' sorts the integer array \lstinline'a' by
delegation to the internal, runnable class \lstinline'Sort'. When run,
\lstinline'Sort(a,i,h,f,rq)' will sort the subsequence of \lstinline'a'
starting at position \lstinline'i' and stepping by \lstinline'h'. It
will assign a null value to future \lstinline'f' when it is done, and
it will use run queue \lstinline'rq' for running subsorts. Method
\lstinline'sort()' has to create \lstinline'SimpleFuture' \lstinline'f'
and \lstinline'RunQueue' \lstinline'rq'. It creates the \lstinline'Sort'
object, telling it to sort array \lstinline'a' from position 0 by step 1
(i.e., all of the array). It puts the \lstinline'Sort' into
\lstinline'rq' to be run and waits for it to complete by calling method
\lstinline'getValue()'. At the end, it sets the maximum number of
\lstinline'Xeq' threads permitted to wait in \lstinline'rq' to zero,
forcing them to terminate.


\subsection{ShellsortDC Sort class}
The \lstinline'Sort' class (shown below) contains two methods: the
\lstinline'run()' method, of course, and a \lstinline'sort()' method
that does its real work.


	%\begin{tabular}
	Internal Sort class of the divide-and-conquer Shell sort
\begin{verbatim}
private static class Sort implements Runnable {
  int[] a; int i, h; SimpleFuture f; RunQueue rq;

  Sort(int[] a, int i, int h, SimpleFuture f, RunQueue rq){
    this.a=a; this.i=i; this.h=h; this.f=f; this.rq=rq;
  }

  void sort(int i, int h) {
    if (numInSequence(i,h,a.length)<=minDivisible)
      isort(a,i,h);
    else {
      SimpleFuture nf=new SimpleFuture();
      Sort s=new Sort(a,i+h,2*h,nf,rq);
      rq.run(s);
      sort(i,2*h);
      try{
        nf.getValue();
      }catch(InterruptedException iex){}
      isort(a,i,h);
    }
  }

  public void run() {
    sort(i,h);
    f.setValue(null);
  }
}
\end{verbatim}

Method \lstinline'sort(i,h)' sorts the subsequence of the array starting
at position \lstinline'i' and stepping by \lstinline'h'. It will call
\lstinline'isort()' to sort a small enough sequence. If the sequence is
long enough, it recursively sorts both odd and even subsequences
concurrently. The even subsequence consists of positions
$i, i+2h, i+4h, \ldots$; the odd subsequence consists of positions
$i+h, i+3h, i+5h, \ldots$.


Method \lstinline'sort()' creates another \lstinline'Sort' object to sort
the odd subsequence with a future in which to signal its completion
and puts it in the run queue to be executed. Then, method
\lstinline'sort()' calls itself recursively to sort the even
subsequence. After both subsequences have been sorted, \lstinline'sort()'
calls \lstinline'isort()' to merge them into a single sorted sequence.
All that method \lstinline'run()' needs to do is call \lstinline'sort()'
and then set the simple future's value to null to report that it is done.


\section{Accumulator}
The code for \lstinline'IntegTrap1' was a bit awkward. We had to build an
array of Threads to keep track of all the subthreads we had to join
with and extract return values from. It would be nice if there were
only one data object to keep track of, and we could use this object to
wait for the completion of a set of threads and accumulate the sum of
the values the threads have computed. Accumulators were designed for
this use.


An \lstinline'Accumulator' has a count, a data object, and a future
associated with it. Subthreads can signal the accumulator,
decrementing its count. When the count becomes zero, the data value is
assigned to the future by calling its \lstinline'setValue()' method. The
calling thread can wait for all the subthreads to signal the
accumulator by getting the value from the future. If the subthreads
are to return values that are, for example, to be summed, the
subthreads themselves can add their results to the data object before
signaling, which allows the caller to get the sum directly from the
future.
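The \lstinline'Accumulator' class itself belongs to our thread package; the following minimal sketch (ours, with a simple wait/notify standing in for the real \lstinline'Future') shows the idea behind it.

```java
// Minimal sketch of the accumulator idea; NOT the thread-package class.
class MiniAccumulator {
    private int count;           // signals still outstanding
    private Object data;         // shared data object
    private boolean done = false;

    MiniAccumulator(int n, Object data) { this.count = n; this.data = data; }

    synchronized Object getData() { return data; }
    synchronized void setData(Object val) { data = val; }

    synchronized void signal() {
        if (--count == 0) {      // last signal: "assign data to the future"
            done = true;
            notifyAll();
        }
    }

    // Stands in for getFuture().getValue(): blocks until all signals arrive.
    synchronized Object getValue() throws InterruptedException {
        while (!done) wait();
        return data;
    }
}
```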


Accumulators use the \lstinline'Future' class, not
\lstinline'SimpleFuture'. \lstinline'Future' is a subclass of
\lstinline'SimpleFuture', so all the \lstinline'SimpleFuture' methods
will work. \lstinline'Future' adds the method \lstinline'runDelayed()'
to submit runnables to a \lstinline'RunQueue' object when the value is
assigned (via method \lstinline'setValue()'). This will be discussed in
more detail later.


\subsubsection{Accumulator Operations}
An \lstinline'Accumulator' has the following operations:


	%\begin{tabular}
	
\textbf{\lstinline'Accumulator'} operations
\begin{verbatim}
         Constructors
         Accumulator(int n)
         Accumulator(int n, Object data)
         Accumulator(int n, Object data, Future f)
         Methods
         void signal()
         Future getFuture()
         Object getData()
         void setData(Object val)
         void runDelayed(Runnable r)
\end{verbatim}

	
\begin{itemize}

\item{Accumulator(int n) creates an Accumulator object that will wait
for n completions. It will assign null to its data value, so null will
also be sent to the future if method setData() has not been called.}

\item{Accumulator(int n, Object data) creates an Accumulator object
that will wait for n completions before placing Object data in its
Future variable.}

\item{Accumulator(int n, Object data, Future f) creates an Accumulator
object that will wait for n completions before placing data in Future
f .}

\item{void signal() is called by a thread to signal that its
operation on the accumulator is complete. The \emph{n}th of these signals
will place the contents of the Accumulator object's data field in its
Future variable.}

\item{Future getFuture() gets the Future object that will have its
value set upon the correct number of signals.}

\item{Object getData() gets the data object.}

\item{void setData(Object val) sets the data object to val .}

\item{void runDelayed(Runnable r) will place the runnable r in a run
queue after n signals have accumulated. It delegates it to the
runDelayed() method in class Future .}

\end{itemize}

\subsubsection{Patterns of Use of Accumulators}
Accumulators allow several different patterns of use.


\subsection{Awaiting completion}
To simply await the completion of a set of subthreads, the data value
may be ignored. The calling thread would do something similar to the
following:


\begin{verbatim}
Accumulator a=new Accumulator(n);
...
a.getFuture().getValue();
\end{verbatim}

Each subthread signals the accumulator when it completes:

\begin{verbatim}
a.signal();
\end{verbatim}

If the subthreads are to test whether some condition is true and the
caller needs to conjoin (\emph{and}) or disjoin (\emph{or}) their
results, the calling thread can initialize the accumulator to the
default value, and subthreads can conditionally assign their result to
the data value.


For example, suppose the caller needs the conjunction of the results
of the subthreads. The caller might do the following:


\begin{verbatim}
Accumulator a=new Accumulator(n,new Boolean(true));
...
if (((Boolean)a.getFuture().getValue()).booleanValue())....
\end{verbatim}

Each subthread executes:

\begin{verbatim}
if (!result) a.setData(new Boolean(false));
a.signal();
\end{verbatim}



For a disjunction, the caller instead starts the data value at false:

\begin{verbatim}
Accumulator a=new Accumulator(n,new Boolean(false));
...
if (((Boolean)a.getFuture().getValue()).booleanValue())....
\end{verbatim}

Each subthread executes:

\begin{verbatim}
if (result) a.setData(new Boolean(true));
a.signal();
\end{verbatim}

For an associative, commutative operation on a primitive type, you
must wrap the primitive type in an object. We consider one approach
for summation:


To get the sum in variable \lstinline'v' the caller does the following:


\begin{verbatim}
Accumulator a=new Accumulator(n,new Double(0.0));
...
v = ((Double)a.getFuture().getValue()).doubleValue();
\end{verbatim}

Each subthread adds its result under the accumulator's lock:

\begin{verbatim}
synchronized(a) {
    a.setData(new Double(
        ((Double)a.getData()).doubleValue()+result));
}
a.signal();
\end{verbatim}


\subsection{Shared data structures}
To update a shared data structure, the subthreads simply get a
reference to the data object and update it. For example, to have the
subthreads place associations in a hash table, the caller might
execute the following statements:


\begin{verbatim}
Accumulator a=new Accumulator(n,new Hashtable());
...
Hashtable h = (Hashtable)a.getFuture().getValue();
\end{verbatim}

Each subthread executes:

\begin{verbatim}
((Hashtable)a.getData()).put(key,value);
a.signal();
\end{verbatim}

Since the \lstinline'put()' method of class \lstinline'Hashtable' is
synchronized, the subthreads do not have to lock the hash table
themselves.


\section{Using Accumulators}

\subsubsection{Numeric Integration}
With run queues and accumulators, we can perform another, more
efficient version of numeric integration, \lstinline'IntegTrap3' . There
are parameters to \lstinline'IntegTrap3' to handle the number of threads
and the number of regions separately. Why? You probably want to
consider the number of processors when you set the number of
threads. You won't get all the processors all the time assigned to
your threads. There are probably more runnable threads in your
program, and there are other processes running on your machine that
may be using some of the processors. This would indicate that you may
not want to create more threads than there are processors, indeed,
maybe not more than the number of processors you can reasonably expect
to get at any time. On the other hand, you might decide to try to
steal processors away from other work or other users by creating more
threads, increasing the likelihood that when a processor is given to
the next runnable thread, the thread will be one of yours.
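The number of processors can be queried from current JVMs; one possible starting point (the policy here is our assumption, not a rule) is:

```java
public class ThreadCount {
    // One heuristic: use the processors the JVM reports, leaving one
    // for the other threads and processes on the machine.
    public static int suggestThreads() {
        int processors = Runtime.getRuntime().availableProcessors();
        return Math.max(1, processors - 1);
    }
}
```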


	%\begin{tabular}
	
\textbf{\lstinline'IntegTrap3'} interface
\begin{verbatim}
public IntegTrap3(int numThreads, int numRegions, int granularity)
public double integrate(double a, double b, F_of_x fn)
\end{verbatim}

The many threads you create are unlikely to all run at the same
rate. Some will complete sooner than the others. If you divide up the
work evenly among them, you will have to wait for the slow ones to
complete. It would be nice to take some of the work from the slow
threads and give it to those that are done first. That is what
creating a lot of regions does; the threads that are done first can
grab more work.


In choosing both the number of threads and the number of regions,
remember that you are aiming for a medium grain size. The fewer
threads and regions there are, the more coarse grained your
computation is, and the more likely your work is to be delayed by
unbalanced loads. The more regions there are, the finer the grain is,
and the larger the fraction of time your program will spend creating
and queueing them.


The code for the \lstinline'integrate()' method is shown below. One
major difference from \lstinline'IntegTrap1' is that it uses an
accumulator. The subthreads add their areas into its data field and
signal their completion through it. The \lstinline'integrate()' method
can then just get the total area out of the accumulator's
\lstinline'Future' variable.


	%\begin{tabular}
	integrate method in IntegTrap3 , using Accumulator and RunQueue
\begin{verbatim}
public double integrate(double a, double b, F_of_x fn){
  int i;
  double totalArea = 0.0d;
  Accumulator acc=null;
  if( a > b )
    throw new BadRangeException();
  if( a == b )
    throw new NoRangeException();
  RunQueue regionQueue = new RunQueue( numThreads );
  try {
    double range = b - a;
    double start = a;
    double end = a + ((1.0d)/numRegions * range);
    acc=new Accumulator(numRegions,new Double(0.0));
    for( i=0; i < numRegions; i++ ) {
      regionQueue.put( new IntegTrap3Region(start,
          end, granularity, fn, acc ));
      start = end;
      end = a + ((i + 2.0d)/numRegions * range);
    }
  }
  catch( Exception e ) {
    System.out.println("Exception occurred in" +
      " creating and initializing thread.\n" +
      e.toString() );
  }
  try {
    totalArea = ((Double)acc.getFuture().getValue()).
      doubleValue();
  } catch(Exception e) {
    System.out.println(
      "Could not retrieve value from " +
      "Accumulator's Future." );
    System.exit( 1 );
  }
  regionQueue.setMaxThreadsWaiting(0);
  return totalArea;
}
\end{verbatim}

It uses \lstinline'RunQueue' \lstinline'regionQueue' to run the
subthreads. At the end, it sets the maximum number of threads permitted
to wait to zero, causing them all to terminate.


The code for \lstinline'IntegTrap3Region' is shown below. It is a
straightforward implementation of the associative, commutative operator
pattern for using accumulators.


	%\begin{tabular}
	IntegTrap3Region adding value to an Accumulator
\begin{verbatim}
class IntegTrap3Region implements Runnable {
  private String name;
  private double x_start, x_end;
  private int granularity;
  private F_of_x f;
  private Accumulator result;

  public IntegTrap3Region( double x_start, double x_end,
      int granularity, F_of_x f, Accumulator result ) {
    this.name = new String( x_start + "-" + x_end );
    this.x_start = x_start;
    this.x_end = x_end;
    this.granularity = granularity;
    this.f = f;
    this.result = result;
  }

  public void run() {
    double area = 0.0d;
    double range = x_end - x_start;
    double g = granularity;
    for( int i = granularity-1; i > 0; i-- ) {
      area += f.f((i/g)*range + x_start);
    }
    area += (f.f(x_start)+f.f(x_end))/2.0;
    area = area*(range/g);
    synchronized (result) {
      result.setData( new Double(
        area+((Double)result.getData()).doubleValue()));
    }
    result.signal();
  }
}
\end{verbatim}

	
\section{TerminationGroup}
An accumulator allows a caller to wait for a fixed number of
subthreads to terminate. Sometimes, however, you won't initially know
how many subthreads will be created. Sometimes, subthreads create
further subthreads, which create further still, and the only
synchronization required is waiting for them all to terminate.


To handle this case, we provide the interface
\lstinline'TerminationGroup'.

\textbf{\lstinline'TerminationGroup'} methods
\begin{verbatim}
        void  awaitTermination() 
        TerminationGroup fork() 
        void runDelayed(Runnable r) 
        void terminate() 
\end{verbatim}

	
\lstinline'TerminationGroup' is an interface. The class that implements
the interface is \lstinline'SharedTerminationGroup'. The name leaves
open the ability to create a \lstinline'DistributedTerminationGroup'
class later. The idea of a termination group is this: There is a
collection of \lstinline'TerminationGroup' objects in a termination
group. From any one of those objects, we can create other objects that
will also be in the group. We get to terminate each object in the
group precisely once. Once an object has been terminated, we are no
longer allowed to create new group members from it. We can, however,
reference any object in the group and call its
\lstinline'awaitTermination()' method. The \lstinline'awaitTermination()'
method will delay us until all objects in the termination group have
been terminated.
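The following minimal sketch (ours, not the \lstinline'SharedTerminationGroup' implementation) may make the protocol concrete: every element of a group shares one count of unterminated members.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the termination-group idea; NOT SharedTerminationGroup.
class MiniTerminationGroup {
    private final AtomicInteger outstanding; // shared by the whole group
    private final Object lock;               // shared wait/notify monitor

    private MiniTerminationGroup(AtomicInteger n, Object lock) {
        this.outstanding = n; this.lock = lock;
    }

    MiniTerminationGroup() { this(new AtomicInteger(1), new Object()); }

    MiniTerminationGroup fork() {
        outstanding.incrementAndGet();       // one more element to terminate
        return new MiniTerminationGroup(outstanding, lock);
    }

    void terminate() {
        if (outstanding.decrementAndGet() == 0)
            synchronized (lock) { lock.notifyAll(); }
    }

    void awaitTermination() throws InterruptedException {
        synchronized (lock) {
            while (outstanding.get() != 0) lock.wait();
        }
    }
}
```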


Interface \lstinline'TerminationGroup' declares the following methods:


\begin{itemize}

\item{void awaitTermination() waits for all elements of a termination
group to terminate.}

\item{TerminationGroup fork() creates another element in the
termination group and returns it. This method may not be called for an
element that has already been terminated.}

\item{void terminate() terminates this element of the termination
group. If this is the last member of the termination group to be
terminated, any threads awaiting termination will be allowed to run.}

\item{void runDelayed(Runnable r) delays the Runnable r until all
elements of the group have terminated and then places it in a run
queue. This will be discussed in Chapter 7, ``Chores.''}

\end{itemize} The class \lstinline'SharedTerminationGroup' is used in a
shared memory system. It has two constructors:


\begin{itemize}

\item{SharedTerminationGroup(Future f) : A SharedTerminationGroup
object contains a Future object and assigns null to it when
termination has occurred. This constructor allows you to supply the
future it uses yourself.}

\item{SharedTerminationGroup() : This form allocates a future for the
group. You don't need to get a reference to the future yourself, since
methods awaitTermination() and runDelayed(Runnable r) do all the
operations on the future that you would want.}

\end{itemize}

The figure below shows a picture of forking and termination. The
creation of a \lstinline'SharedTerminationGroup' object must eventually
be matched by a call to \lstinline'terminate()'. Each fork must also
lead to a termination.


\includegraphics{figures/ParSubs-6}

\section{Combinatorial Search}
There are a number of optimization problems that involve finding the
optimal combination of things (i.e., finding some combination of the
items that maximizes or minimizes some function subject to some
constraints).
 In the worst case, these algorithms will have to try out each
combination of the items, see if they meet the constraints, calculate
their value, and remember the best combination. \emph{N} things have
$2^N$ combinations. Each additional item can double the number of
combinations and the amount of time you may have to spend.


For many of these problems, there are ways to cut down on the number
of combinations you have to search through. Quite often, the median
search time for these problems will be quite modest. But it is the
nature of these problems that there will be some instances that would
go on until after the sun burns out or you press control-C to force
the execution to quit.


\subsubsection{The 0-1 Knapsack Problem}
As an example of a combinatorial search problem, we will look at the
0-1 knapsack problem. In this metaphor, you have a knapsack that can
hold only a certain weight. You have a number of items that you could
put in the knapsack. Each item has a profit and a weight. You want to
pack the maximum value into the knapsack without exceeding its
capacity. You must include an item as a whole, or exclude it. You
can't include only a part of an item.


These rules are formulated as follows:


\includegraphics{figures/ParSubs-7}

$x_i$ indicates whether or not item $i$ is included, so
$x_i$ can only take on a 0 or a 1, hence the name 0-1 knapsack
problem. $p_i$ is the profit from including item $i$, and
$w_i$ is the weight of item $i$. $C$ is the capacity
of the knapsack.
 
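Written out from these definitions (a reconstruction of the formulation rendered in the figure), the problem is:

\begin{displaymath}
\mbox{maximize}\quad \sum_{i=0}^{N-1} p_i x_i
\qquad\mbox{subject to}\quad
\sum_{i=0}^{N-1} w_i x_i \le C, \qquad x_i \in \{0,1\}
\end{displaymath}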

Suppose you have \emph{N} items, numbered 0 through $N-1$. How do you
search for a solution? Well, you could try writing \emph{N} nested
loops, each setting an $x_i$ to 1 and 0 and testing the constraint and
evaluating the profit in the innermost loop, remembering the best found.


That, however, is not a general-purpose program. What we can do is use
a recursive depth-first search (DFS) method to assign values to the
\emph{x}'s. Suppose when we call \lstinline'dfs(i,rc,P)', all the
variables $x_j$, $0 \le j < i$, have been assigned 0s
and 1s, leaving \lstinline'rc' remaining capacity and accumulating
\lstinline'P' profit. (See the pseudocode below.)
If $i = N$, we have assigned values to all the \emph{x}'s.
Setting the \emph{x}'s to one value specifies the combination of
the items we're including. If the remaining capacity \lstinline'rc' $\ge 0$,
we have not exceeded the capacity of the knapsack. If \lstinline'P' is
greater than the profit we got for any previous combination, we should
remember this one.


If $i < N$, we haven't assigned all the $x_i$'s
yet. We should first try including the next item, assigning $x_i$
the value one and recursively calling \lstinline'dfs()' for position
$i+1$. Then we assign $x_i$ zero and recursively call the
function again.


	%\begin{tabular}
	Simple 0-1 algorithm (in pseudocode)
\begin{verbatim}
dfs(i,rc,P):
    if i==N and rc>=0 and P>bestP
        remember this solution
    else
        xi = 1
        dfs(i+1,rc-wi,P+pi)
        xi = 0
        dfs(i+1,rc,P)
\end{verbatim}
	%\end{tabular}
These recursive calls are said to search a state-space tree. Each
assignment to the values of variables $x_j$, $0 \le j < i$, is known as
a state of the search. The state space is the set of all states. Our
search treats the state space as a tree, and each state is a node of
the tree. The leaves of the tree represent combinations of the items
that we are considering including in the knapsack.


There is an obvious optimization we can put in. If in some recursive
call of \lstinline'dfs()' we have already exceeded the capacity of the
knapsack, which we will know by \lstinline'rc' being less than zero, we
can omit searching the subtree. No leaf in it will meet the
constraint. This is known as a cutoff (cutting off the search if we
know it cannot lead to an optimum solution).


There's another cutoff that works well for the 0-1 knapsack problem,
but it requires sorting the items before running the algorithm. The
idea is that we can cut off searching a subtree if, with the profit
accumulated so far and with the remaining capacity, there is no way for
the subtree to give us a better profit than we have found so far.


\includegraphics{figures/ParSubs-8}

We sort the items by nonincreasing value per weight. This means that we
will greedily gather up the items with the best profit for their weight
first. To decide whether to continue, we multiply the remaining
capacity by the profit per weight of the next item and add that to the
profit we have accumulated thus far: $rc \cdot p_i/w_i + P$. This tells
us how much profit we could get if we could exactly fill the remainder
of the knapsack with items as profitable for their weight as the next
item. It gives us an upper bound on how much profit we could get
exploring the current subtree, since no remaining item has a greater
profit per weight than this next item. If this upper bound is less than
or equal to the best profit we've found so far, we can cut off
searching this subtree. (See the pseudocode below.)


	%\begin{tabular}
	Optimized 0-1 algorithm (in pseudocode)
\begin{verbatim}
dfs(i,rc,P):
    if i==N and rc>=0 and P>bestP
        remember this solution
    else
        if rc*pi/wi + P <= bestP then return
        if rc>=wi then
            xi = 1
            dfs(i+1,rc-wi,P+pi)
        xi = 0
        dfs(i+1,rc,P)
\end{verbatim}
	%\end{tabular}
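A sequential Java rendering of this optimized algorithm (the class and method names here are ours, and the items are assumed to be pre-sorted by nonincreasing profit per weight) might look like this:

```java
// Sequential bounded DFS for the 0-1 knapsack problem (sketch).
// Assumes p[i]/w[i] is nonincreasing in i.
class Knapsack01 {
    final int[] p, w;   // profits and weights
    final int C;        // knapsack capacity
    int bestP = 0;      // best profit found so far

    Knapsack01(int[] p, int[] w, int C) { this.p = p; this.w = w; this.C = C; }

    void dfs(int i, int rc, int P) {
        if (i == p.length) {
            if (rc >= 0 && P > bestP) bestP = P;   // remember this solution
            return;
        }
        // Bounding cutoff: even filling all remaining capacity at the
        // next item's profit per weight cannot beat the best so far.
        if (rc * (double) p[i] / w[i] + P <= bestP) return;
        if (rc >= w[i]) dfs(i + 1, rc - w[i], P + p[i]);  // xi = 1
        dfs(i + 1, rc, P);                                 // xi = 0
    }

    int solve() { dfs(0, C, 0); return bestP; }
}
```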
	
\subsubsection{Parallel Depth-first Search for the Knapsack Problem}
A depth-first search has a huge number of procedure calls. If the procedures are independent, the program is a candidate for parallel execution.

In the algorithms sketched above, there are two ways that the procedure
calls are not independent:


\begin{enumerate}

\item{The \emph{x}'s are global variables.}

\item{The best solution and its profit are global variables.}

\end{enumerate}
If we can make these independent, we can parallelize the computation.

The \emph{x}'s can be made independent by passing \emph{x} by value
(i.e., giving each procedure instance its own copy).

The global variables for the best solution found so far and its profit
can be kept in a monitor, a shared object that the threads lock.
Indeed, if the profit can be kept in four bytes (e.g., in an
\lstinline'int' or \lstinline'float'), it can be declared volatile and
examined directly.
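Such a monitor might be sketched as follows (our illustration; the field and method names are assumptions, not the book's classes):

```java
import java.util.BitSet;

// Monitor guarding the best solution found so far.
class Best {
    private int bestProfit = 0;
    private BitSet bestSelected = new BitSet();

    synchronized int getBestProfit() { return bestProfit; }

    // Called by a searching thread that found a candidate solution;
    // holding the lock makes the compare-and-update atomic.
    synchronized void offer(int profit, BitSet selected) {
        if (profit > bestProfit) {
            bestProfit = profit;
            bestSelected = (BitSet) selected.clone(); // keep our own copy
        }
    }

    synchronized BitSet getBestSelected() {
        return (BitSet) bestSelected.clone();
    }
}
```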

Thus, we can make the subroutines independent and parallelizable. That still leaves the problems of the grain size and flooding. The procedure calls are too parallelizable. We could create a thread for each call, but that would eat up memory at a ferocious rate, and the individual calls don't do much computation, so the grain size would be tiny. We need to throttle the computation.

Here's how we can do it. We decide on our grain size by the size of a
search tree we would be willing to search by a recursive DFS method. We
want to pick a number of levels ($k$) that gives us a reasonable number
of nodes to examine. If $k$ is 10, we will search through about a
thousand nodes; if $k$ is 20, we will search through about a million.
We then divide the search along level $N-k$ in the tree. Beyond level
$N-k$, the DFS method just searches an entire subtree out to the
leaves. Above that level, we use a special version of our DFS algorithm
to \emph{generate} parallel searches. When it gets down to level $N-k$,
it creates a search runnable to do the rest of the search. The generate
part of the algorithm then returns to follow another branch and
generate another search.


\includegraphics{figures/ParSubs-9}

This is not the same algorithm as a simple depth-first search. The
simple DFS can spend a long time searching a subtree that would have
been cut off if another subtree had been searched first. The parallel
solution can spend some time examining the other subtree, set a
bounding value from it, and cut off the futile subtree. As a
consequence, the parallel DFS can produce superlinear speedups. That
is, parallel DFS with \emph{N} processors can run in less than $1/N$th
the time of the sequential version. The extra processors can reduce
each other's work. Of course, by the rule that speedup must be computed
compared to the best sequential algorithm, perhaps we should be
comparing the parallel version to a concurrent execution of itself on a
single processor, which may remove the superlinearity.


\subsubsection{Knapsack2}

\lstinline'Knapsack2' is a parallel DFS algorithm for the 0-1 knapsack
problem. Its overall structure is shown below.


	%\begin{tabular}
	Knapsack2 : a parallel DFS algorithm for the 0-1 knapsack problem
\begin{verbatim}
class Knapsack2 {
  private static class Item{
    int profit,weight,pos;
    float profitPerWeight;
  }
  int LEVELS;
  BitSet selected;
  int capacity;
  volatile float bestProfit=0;
  Item[] item;
  RunQueue rq=new RunQueue();
  Future done=new Future();
  TerminationGroup tg=new SharedTerminationGroup(done);

  public BitSet getSelected() throws InterruptedException {
    // see Example 5-17
  }

  public int getProfit() throws InterruptedException {
    done.getValue();
    rq.setMaxThreadsWaiting(0);
    return (int)bestProfit;
  }

  void gen(int i, int rw, int p, BitSet b) {
    // see Example 5-15
  }

  public Knapsack2(int[] weights, int[] profits, int capacity,
      int LEVELS){
    // see Example 5-14
  }

  class Search implements Runnable {
    // see Example 5-16
  }
}
\end{verbatim}

	The local class \lstinline'Item' and the corresponding array
\lstinline'item' are used to keep all the relevant information about
each item that could be included in the knapsack. Fields
\lstinline'profit' and \lstinline'weight' are self-explanatory. Field
\lstinline'profitPerWeight' keeps the ratio of the \lstinline'profit'
and \lstinline'weight' fields. This value is used throughout the
algorithm, so it is probably cheaper to store it than to recompute it.
The field \lstinline'pos' indicates the original position of the item
in the arrays of profits and weights provided by the user. The array
\lstinline'item' is sorted by nonincreasing \lstinline'profitPerWeight'
to facilitate cutoffs.

Field \lstinline'LEVELS' is the number of levels the generator will go
to before releasing separate depth-first searches. Assuming that there
are no cutoffs (due, e.g., to the next item not fitting in the
knapsack), there will be $2^{\mathit{LEVELS}}$ searches that could be
done in parallel. The larger the value of \lstinline'LEVELS' is, the
greater the cost of creating search objects and queueing them for
execution. The smaller the value of \lstinline'LEVELS' is, the deeper
the searches will be. The grain size of the searches is inversely
exponentially proportional to the value of \lstinline'LEVELS'.

Field \lstinline'selected' indicates the members of the item array that
are selected in the best solution found so far. The field
\lstinline'bestProfit' is the profit obtained from the selected items.

Field \lstinline'rq' is the run queue that the search runnables are
placed in to be run in FIFO order.

Field \lstinline'tg' is the termination group that search runnables use
to indicate that they have completed their part of the search.
\lstinline'Future' \lstinline'done' is assigned null by \lstinline'tg'
when all searches have completed. Therefore, to wait for the search to
be completed, one need only call \lstinline'done.getValue()'.

\subsection{Constructor}

	%\begin{tabular}
	Knapsack2 constructor
\begin{verbatim}
public Knapsack2(int[] weights, int[] profits, int capacity,
    int LEVELS){
  this.LEVELS=LEVELS;
  if (weights.length!=profits.length)
    throw new IllegalArgumentException(
      "0/1 Knapsack: differing numbers of"+
      " weights and profits");
  if (capacity<=0)
    throw new IllegalArgumentException(
      "0/1 Knapsack: capacity<=0");
  item = new Item[weights.length];
  int i;
  for (i=0; i<weights.length; i++) {
    item[i]=new Item();
    item[i].profit=profits[i];
    item[i].weight=weights[i];
    item[i].pos=i;
    item[i].profitPerWeight=
      ((float)profits[i])/weights[i];
  }
  int j;
  for (j=1; j<item.length; j++) {
    for (i=j;
        i>0 &&
        item[i].profitPerWeight >
          item[i-1].profitPerWeight;
        i--) {
      Item tmp=item[i];
      item[i]=item[i-1];
      item[i-1]=tmp;
    }
  }
  if (LEVELS>item.length) this.LEVELS=item.length;
  rq.setWaitTime(10000);
  rq.setMaxThreadsCreated(4);
  gen(0,capacity,0,new BitSet(item.length));
  tg.terminate();
}
\end{verbatim}

	There are four sections to the
\lstinline'Knapsack2'
constructor:


\begin{enumerate}

\item{Check the arguments and throw an exception if an error is detected.}

\item{Initialize the \lstinline'item' array.}

\item{Sort the \lstinline'item' array by nonincreasing profit per weight. Here, we use an insertion sort. A faster sorting algorithm is not warranted for just a few items, and if there are a great many items, the exponential running time of the search itself will so dominate the running time as to make the sort trivial.}

\item{Do the actual search. The heart of this section is calling the recursive routine \lstinline'gen()' to generate the searches. There are several other things to do here as well:
\begin{itemize}
\item{Make sure that \lstinline'LEVELS' isn't greater than the number of items, since the number of items is the full depth of the search tree.}
\item{Set parameters on \lstinline'rq'. Here, we are not allowing more than four threads to be created.}
\item{After generating the searches, we call \lstinline'terminate()' to terminate object \lstinline'tg'. This \lstinline'terminate()' terminates the original instance of the termination group that was constructed when this \lstinline'Knapsack2' object was created.}
\end{itemize}}

\end{enumerate}

\subsection{Method gen()}
Method
\lstinline'gen()'
searches the top levels of the state-space tree in class
\lstinline'Knapsack2'
. If it has searched through
\lstinline'LEVELS'
levels, it creates a
\lstinline'DSearch'
object to do the rest of the depth-first search and places it in
\lstinline'rq'
to be searched in parallel with other searches. For this search, we must clone the bitset
\lstinline'b'
that represents the
\lstinline'x'
variables and fork another element of the termination group
\lstinline'tg'
for the search to use.


	%\begin{tabular}
	Method gen() : top level of the tree in class Knapsack2
\begin{verbatim}
void gen(int i, int rw, int p, BitSet b) {
    if (i>=LEVELS) {
        rq.run(new Search(i,rw,p,
                          (BitSet)b.clone(),tg.fork()));
        return;
    }
    if (rw - item[i].weight >= 0) {
        b.set(i);
        gen(i+1,rw-item[i].weight,p+item[i].profit,b);
    }
    b.clear(i);
    gen(i+1,rw,p,b);
    return;
}
\end{verbatim}

	Method
\lstinline'gen()'
walks over the state-space tree in a greedy order. It first includes the next item in the knapsack and generates all searches that include it. Then, it excludes the item and generates all searches that do not include it. Since the searches are executed in FIFO order, those trees that include many of the initial high-profit-per-weight items will be searched first. It is highly likely that the best solution will be found quickly, which will cut off many of the later, fruitless searches.


\subsection{The Search class}

	%\begin{tabular}
	Search class of Knapsack2
\begin{verbatim}
class Search implements Runnable {
    BitSet selected;
    int from;
    int startWeight=0;
    int startProfit=0;
    TerminationGroup tg;
    Search(int from,
           int remainingWeight,
           int profit,
           BitSet selected,
           TerminationGroup tg) {
        this.from=from;
        startWeight=remainingWeight;
        startProfit=profit;
        this.selected=selected;
        this.tg=tg;
    }
    void dfs(int i, int rw, int p) {
        if (i>=item.length) {
            if (p>bestProfit) {
                synchronized(Knapsack2.this) {
                    if (p>bestProfit) {
                        bestProfit=p;
                        Knapsack2.this.selected=
                            (BitSet)selected.clone();
                    }
                }
            }
            return;
        }
        if (p+rw*item[i].profitPerWeight<bestProfit) return;
        if (rw-item[i].weight>=0) {
            selected.set(i);
            dfs(i+1,rw-item[i].weight,p+item[i].profit);
        }
        selected.clear(i);
        dfs(i+1,rw,p);
        return;
    }
    public void run(){
        dfs(from,startWeight,startProfit);
        tg.terminate();
    }
}
\end{verbatim}

	The internal class
\lstinline'Search'
handles the final depth-first search down to the leaves of the tree. Its fields are as follows:


\begin{itemize}

\item{\lstinline'selected': The bits give the values of the $x$ variables: set for 1 and cleared for 0. Initially, it contains the bits that \lstinline'gen()' set.}

\item{\lstinline'from': This holds the position of the item at which the search starts. In \lstinline'Knapsack2', this will equal \lstinline'LEVELS'.}

\item{\lstinline'startWeight': This is the remaining capacity that this search can allocate.}

\item{\lstinline'startProfit': This is the profit accumulated before this search was created, i.e., the profits for the items in the initial value of \lstinline'selected'.}

\end{itemize}

\lstinline'Search'
's method
\lstinline'run()'
is trivial. All it has to do is call the DFS method
\lstinline'dfs()'
and then terminate this search's instance of the termination group when
\lstinline'dfs()'
returns.

Method
\lstinline'dfs()'
does the real work and is reasonably straightforward. Its one unusual feature is the way it decides whether to record a new, better solution. If it has reached a leaf, recognized by
\lstinline'i>=item.length'
, it first checks to see if its profit is better than the best found so far (
\lstinline'p>bestProfit'
). It can do this check relatively cheaply, since
\lstinline'bestProfit'
is a volatile shared variable. It can just fetch and examine it. Only if
\lstinline'p'
is greater than
\lstinline'bestProfit'
is it worth locking the enclosing
\lstinline'Knapsack2'
object containing the shared
\lstinline'selected'
and
\lstinline'bestProfit'
variables. It locks the enclosing object [by calling
\lstinline'synchronized(Knapsack2.this)'
] and then again checks that it still has a better profit. Some other thread could have changed the shared values before this thread got the lock.
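The check-lock-recheck idiom that \lstinline'dfs()' uses can be isolated into a small class of its own. The following is an illustrative sketch (the class and method names are ours, not from \lstinline'Knapsack2'):

```java
public class BestSoFar {
    // volatile lets readers do the cheap first check without locking
    private volatile int bestProfit = 0;

    // Record p only if it beats the best found so far.
    public void offer(int p) {
        if (p > bestProfit) {             // cheap unsynchronized check
            synchronized (this) {
                if (p > bestProfit) {     // recheck: another thread may
                    bestProfit = p;       // have improved it meanwhile
                }
            }
        }
    }

    public int get() { return bestProfit; }
}
```

Most calls to \lstinline'offer()' fail the first test and never touch the lock, which is exactly why the search can afford to test the bound at every leaf.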


\subsection{Methods getSelected() and getProfit()}
Methods
\lstinline'getSelected()'
and
\lstinline'getProfit()'
have to wait for the search to terminate by calling
\lstinline'done.getValue()'
. This is a good place, knowing that the search is done, to set the number of threads allowed to wait in the run queue
\lstinline'rq'
to zero. The run queue won't be needed any more, now that the search is over.


	%\begin{tabular}
	Method getSelected() of Knapsack2
\begin{verbatim}
public BitSet getSelected() throws InterruptedException {
    done.getValue();
    rq.setMaxThreadsWaiting(0);
    BitSet s=new BitSet(item.length);
    for(int i=0; i<item.length; i++) {
        if (selected.get(i)) s.set(item[i].pos);
    }
    return s;
}
\end{verbatim}

	The loop in
\lstinline'getSelected()'
translates the
\lstinline'selected'
bit set into the correct form for the caller. The field
\lstinline'selected'
assigns bits in terms of positions in the
\lstinline'item'
array, whereas the bits returned to the caller must be in terms of the positions in the input
\lstinline'weight'
and
\lstinline'profit'
arrays.


\section{PriorityRunQueue}
The
\lstinline'PriorityRunQueue'
class has the same operations as
\lstinline'RunQueue'
, except that the operations to insert runnables into the queue take a second parameter, the priority. When an
\lstinline'Xeq'
thread removes a runnable to execute it, the thread will always get the runnable with the highest priority. If several have the highest priority, it will get an arbitrary one of them.


	%\begin{tabular}
	
\textbf{\lstinline'PriorityRunQueue'} constructors and methods
\begin{verbatim}
        PriorityRunQueue()
        PriorityRunQueue(int maxCreatable)
        public void put(Runnable runnable, double priority)
        public void run(Runnable runnable, double priority)
        public void setMaxThreadsWaiting(int n)
        public void setMaxThreadsCreated(int n)
        public int getMaxThreadsWaiting()
        public int getMaxThreadsCreated()
        public int getNumThreadsWaiting()
        public int getNumThreadsCreated()
        public void setWaitTime(long n)
        public long getWaitTime()
        public void terminate() 
        public void setPriority(int n)
        public int getPriority()
        public void setDaemon(boolean makeDaemon)
        public boolean getDaemon()
\end{verbatim}

	The priorities would make no real difference if the priority run queue were set up to create as many threads as needed to run all enqueued runnables. They only matter if some runnables are forced to wait.

The default value of
\lstinline'maxThreadsCreated'
for a
\lstinline'PriorityRunQueue'
object is one.

Although
\lstinline'PriorityRunQueue'
has much the same methods as
\lstinline'RunQueue'
, there is no class relationship between them. Neither is a subclass of the other, nor do they inherit from any common ancestor other than
\lstinline'Object'
.

We now present a couple of notes on the methods:


\begin{itemize}

\item{As in the class \lstinline'RunQueue', \lstinline'put()' and \lstinline'run()' are synonyms. Although they take a double-precision priority, the priority is kept internally as a \lstinline'float', so don't count on the full double-precision number of digits to make subtle distinctions between priorities.}

\item{Methods \lstinline'getPriority()' and \lstinline'setPriority()' refer to the priority of the \lstinline'Xeq' threads that will execute the runnables. They have nothing to do with the priorities specified in the \lstinline'put()' and \lstinline'run()' methods.}

\end{itemize}
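The \lstinline'PriorityRunQueue' class belongs to our thread package, but its central idea can be sketched with the standard library. The following minimal analogue (our own illustrative code, not the package's implementation) uses a \lstinline'java.util.concurrent.PriorityBlockingQueue' and a single \lstinline'Xeq'-style worker thread; the real class additionally manages thread creation limits and wait times:

```java
import java.util.concurrent.PriorityBlockingQueue;

// A minimal sketch of a priority run queue: one worker thread
// repeatedly takes and runs the highest-priority runnable.
public class MiniPriorityRunQueue {
    private static class Entry implements Comparable<Entry> {
        final Runnable r;
        final double priority;
        Entry(Runnable r, double priority) { this.r = r; this.priority = priority; }
        // Reverse the comparison so higher priorities come out first.
        public int compareTo(Entry o) {
            return Double.compare(o.priority, priority);
        }
    }

    private final PriorityBlockingQueue<Entry> q = new PriorityBlockingQueue<>();
    private final Thread xeq = new Thread(() -> {
        try {
            for (;;) q.take().r.run();
        } catch (InterruptedException e) { /* terminate() interrupts us */ }
    });

    public MiniPriorityRunQueue() {
        xeq.setDaemon(true);
        xeq.start();
    }

    // As in the package, put() and run() would be synonyms; we show run().
    public void run(Runnable runnable, double priority) {
        q.put(new Entry(runnable, priority));
    }

    public void terminate() { xeq.interrupt(); }
}
```

Because the worker always takes the largest priority currently enqueued, runnables submitted while the worker is busy are executed best-first rather than in arrival order.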

\section{Branch-and-Bound with Priority Run Queues}
Branch-and-bound algorithms are search algorithms. When the search has to make a choice, the branch-and-bound algorithm branches; in effect, it goes both ways. The branch-and-bound algorithm estimates which alternatives are most likely to produce better solutions, so that they can be pursued first. It also remembers the best solution it has found so far and ceases searches of alternatives that cannot produce a better solution. Finding good solutions quickly can prevent making a lot of futile computation.


\subsubsection{Branch and Bound for 0-1 Knapsack}
Using a priority run queue, threads can pursue the most promising paths first. Threads can dynamically generate searches and place them in the priority run queue. This gives a branch-and-bound algorithm.

The 0-1 knapsack branch-and-bound algorithm,
\lstinline'Knapsack3'
, is shown in
\ref{ParSubs.xml#id(74516)}[MISSING HREF]
. It is not a pure branch-and-bound algorithm, because it switches to depth-first search to search the final subtrees, just as does
\lstinline'Knapsack2'
.


	%\begin{tabular}
	Knapsack3 : Branch-and-bound algorithm
\begin{verbatim}
class Knapsack3 {
    private static class Item{
        int profit,weight,pos;
        float profitPerWeight;
    }
    int LEVELS=5;
    BitSet selected;
    int capacity;
    volatile float bestProfit=0;
    Item[] item;
    PriorityRunQueue prq=new PriorityRunQueue();
    Future done=new Future();
    TerminationGroup tg=new SharedTerminationGroup(done);
    public BitSet getSelected() throws InterruptedException {
        //as in Knapsack2
    }
    public int getProfit() throws InterruptedException {
        //as in Knapsack2
    }
    class SearchFactory {
        // see Example 5-20
    }
    SearchFactory searchFactory=new SearchFactory();
    class Search implements Runnable {
        // see Example 5-19
    }
    public Knapsack3(int[] weights, int[] profits, int capacity,
                     int DFSLEVELS){
        // see Example 5-22
    }
}
\end{verbatim}

	One difference in the fields from
\lstinline'Knapsack2'
is that the run queue
\lstinline'rq'
has been replaced by a priority run queue
\lstinline'prq'
. Another difference is the addition of a field
\lstinline'searchFactory'
that references an object that allows us to recycle
\lstinline'Search'
objects, rather than having to create a new one each time we need one.


\subsection{Class Search}

	%\begin{tabular}
	Class Search in Knapsack3
\begin{verbatim}
class Search implements Runnable {
    int i; int rw; int p; BitSet b; TerminationGroup tg;
    Search(int i,int rw,int p,
           BitSet b,TerminationGroup tg){
        this.i=i; this.rw=rw; this.p=p;
        this.b=b; this.tg=tg;
    }
    public void run(){
        // see Example 5-21
    }
    void dfs(int i, int rw, int p) {
        // same as in Knapsack2 with the name
        // Knapsack2 replaced with Knapsack3
    }
}
\end{verbatim}
\end{verbatim}

	The structure of the
\lstinline'Search'
class in
\lstinline'Knapsack3'
is shown in
\ref{ParSubs.xml#id(55469)}[MISSING HREF]
. It combines the functions of both method
\lstinline'gen()'
and class
\lstinline'Search'
in
\lstinline'Knapsack2'
. In translating
\lstinline'Knapsack2'
to
\lstinline'Knapsack3'
, the parameter names of
\lstinline'gen()'
became the field names of
\lstinline'Search'
, so if you wish to do a comparison of the code,
\lstinline'i'
in
\lstinline'Knapsack3'
corresponds to
\lstinline'from'
in
\lstinline'Knapsack2'
,
\lstinline'rw'
to
\lstinline'remainingWeight'
,
\lstinline'p'
to
\lstinline'profit'
, and
\lstinline'b'
to
\lstinline'selected'
. Field
\lstinline'tg'
remains the same.


\subsection{Class SearchFactory}

	%\begin{tabular}
	Class SearchFactory
\begin{verbatim}
class SearchFactory {
    Stack prev=new Stack();
    Search make(int i, int rw, int p,
                BitSet b, TerminationGroup tg) {
        Search g=null;
        synchronized (this) {
            if (!prev.isEmpty()) g=(Search)prev.pop();
        }
        if (g==null)
            return new Search(i,rw,p,b,tg);
        else {
            g.i=i; g.rw=rw; g.p=p; g.b=b; g.tg=tg;
            return g;
        }
    }
    synchronized void recycle(Search g) {
        if (prev!=null) prev.push(g);
    }
    synchronized void terminate() {
        prev=null;
    }
}
\end{verbatim}

	
\lstinline'SearchFactory'
is a new class in
\lstinline'Knapsack3'
. Its purpose is to reuse
\lstinline'Search'
objects. When a new
\lstinline'Search'
object is needed,
\lstinline'SearchFactory'
's
\lstinline'make()'
method is called. It will try to allocate a
\lstinline'Search'
object off a local stack; if the stack is empty, it will create a new one. When a search terminates, it calls the
\lstinline'searchFactory'
's
\lstinline'recycle()'
method to push itself onto the stack. When the entire search is done, the algorithm calls
\lstinline'searchFactory'
's
\lstinline'terminate()'
method to dispose of all the
\lstinline'Search'
objects that are no longer needed.

The method
\lstinline'run()'
in class
\lstinline'Search'
(see
\ref{ParSubs.xml#id(26494)}[MISSING HREF]
) performs most of the functions of method
\lstinline'gen()'
in
\lstinline'Knapsack2'
. It searches the levels of the state-space tree nearest the root, calling method
\lstinline'dfs()'
to search the further reaches of the branches.
\ref{ParSubs.xml#id(94598)}[MISSING HREF]
shows this graphically, with solid lines indicating paths followed by
\lstinline'run()'
and the dotted edges representing runnables created to explore other branches. It has a number of features that need to be discussed:


\begin{enumerate}

\item{The \lstinline'for' loop is used to optimize tail recursion. If the last thing a recursive method does before returning is call itself, it can save overhead by assigning new values to its parameters and jumping back to its top. That is why the \lstinline'for' loop is here.}

\end{enumerate}
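The tail-recursion transformation described above can be seen in miniature on a simpler method. This example (illustrative, unrelated to the knapsack code) shows the same rewrite \lstinline'run()' applies: reassign the parameters and loop instead of recursing:

```java
public class TailCall {
    // Recursive form: the self-call is the last action before returning.
    static int gcdRecursive(int a, int b) {
        if (b == 0) return a;
        return gcdRecursive(b, a % b);   // tail call
    }

    // Loop form: assign new values to the "parameters" and jump back to
    // the top, saving a stack frame per call.
    static int gcdLoop(int a, int b) {
        for (;;) {
            if (b == 0) return a;
            int t = a % b;               // compute the new parameter values
            a = b;
            b = t;                       // ...then continue the loop
        }
    }
}
```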





\includegraphics{figures/ParSubs-11}

	%\begin{tabular}
	Method run() in class Search of Knapsack3
\begin{verbatim}
public void run(){
    for(;;){
        if (p+rw*item[i].profitPerWeight<bestProfit)
            break;
        if (i>=LEVELS) {
            dfs(i,rw,p);
            break;
        }
        if (rw - item[i].weight >= 0) {
            // first, start zero's subtree
            b.clear(i);
            prq.run(searchFactory.make(i+1,rw,p,
                        (BitSet)b.clone(),tg.fork()),
                    p+rw*item[i+1].profitPerWeight);
            // then iterate to search the one's subtree
            b.set(i);
            rw-=item[i].weight;
            p+=item[i].profit;
            ++i;
        } else { //just iterate to search zero subtree
            b.clear(i);
            ++i;
        }
    }
    tg.terminate();
    searchFactory.recycle(this);
}
\end{verbatim}

	
\begin{enumerate}

\item{The first if statement is to cut off the search if the best profit we can hope to achieve in this search is less than or equal to the best we've found so far.}

\item{The second if statement converts to a regular DFS if we have searched down enough levels in run() .}

\item{The third if statement checks to see if we have enough capacity to include the next item in the knapsack. If so, we create another Search object to explore the consequences of not including it. We schedule that search in the priority run queue prq, with the upper bound on its profit as its priority. Then, we add in the profit and subtract the weight of the next item, increment the position i on the item array, and loop back around to continue the search.}

\item{The else part of the third if statement considers the case where we are not able to include this item in the knapsack. We simply skip it and loop back to continue the search.}

\item{The loop follows the greedy path through the tree, including the next most profitable item repeatedly until we are cut off or have reached the proper level to do a depth-first search, which we do by calling dfs() .}

\item{When we drop out of the loop, we are done with this Search object. We call terminate() on our termination group object tg to denote that we're done, and then we recycle our Search object. We only dare call recycle as the last action before returning, since our object could be reused immediately, which would clobber our fields.}

\end{enumerate}
The method
\lstinline'dfs()'
in
\lstinline'Knapsack3'
is the same as it is in
\lstinline'Knapsack2'
, except that occurrences of the identifier
\lstinline'Knapsack2'
are replaced with
\lstinline'Knapsack3'
, of course.


\subsection{Constructor}
The constructor for
\lstinline'Knapsack3'
(see
\ref{ParSubs.xml#id(37485)}[MISSING HREF]
) is similar to the constructor for
\lstinline'Knapsack2'
(
\ref{ParSubs.xml#id(66636)}[MISSING HREF]
). There are two major differences:


	%\begin{tabular}
	Constructor for Knapsack3
\begin{verbatim}
public Knapsack3(int[] weights, int[] profits, int capacity,
                 int DFSLEVELS){
    if (weights.length!=profits.length)
        throw new IllegalArgumentException(
            "0/1 Knapsack: differing numbers of"+
            " weights and profits");
    if (capacity<=0)
        throw new IllegalArgumentException(
            "0/1 Knapsack: capacity<=0");
    item = new Item[weights.length];
    int i;
    for (i=0; i<weights.length; i++) {
        item[i]=new Item();
        item[i].profit=profits[i];
        item[i].weight=weights[i];
        item[i].pos=i;
        item[i].profitPerWeight=
            ((float)profits[i])/weights[i];
    }
    int j;
    for (j=1; j<item.length; j++) {
        for (i=j;
             i>0 &&
             item[i].profitPerWeight >
                 item[i-1].profitPerWeight;
             i--) {
            Item tmp=item[i];
            item[i]=item[i-1];
            item[i-1]=tmp;
        }
    }
    LEVELS=Math.max(1,item.length-DFSLEVELS);
    prq.setWaitTime(10000);
    prq.setMaxThreadsCreated(4);
    prq.run(
        searchFactory.make(0,capacity,0,
                           new BitSet(item.length),tg),
        0);
}
\end{verbatim}

	
\begin{enumerate}

\item{Knapsack3 takes the parameter DFSLEVELS rather than LEVELS . It indicates how many levels are desired in the final subtrees to be searched by method dfs() . The constructor has to compute LEVELS from that, which is simply item.length-DFSLEVELS . If DFSLEVELS is larger than the number of items, LEVELS is set to one.}

\item{Rather than calling gen() , as Knapsack2 does, Knapsack3 has to execute the run() method of a Search object. It does this by simply creating the object and dropping it into the priority run queue. It arbitrarily gives it the priority 0. Any priority would do, since this is the first entry in the queue; no others will be added until it is run.}

\end{enumerate}

\subsubsection{A Purer Branch-and-Bound 0-1 Knapsack}

\lstinline'Knapsack3'
is not a pure implementation of branch-and-bound principles. Although the runnables created for other branches are scheduled in a priority run queue, the path that the
\lstinline'run()'
method takes through the tree is not rescheduled. According to branch-and-bound principles, it should be. It could, at some point, no longer have the highest possible profit. The branch-and-bound algorithm should always be following the most promising path.


\lstinline'Knapsack4'
is a purer branch-and-bound algorithm. The only difference from
\lstinline'Knapsack3'
is the
\lstinline'run()'
method in class
\lstinline'Search'
, shown in Example 5-23. The
\lstinline'for'
loop is no longer present. Instead of looping back to follow a path, it simply resubmits itself to the priority run queue. If it is still following the most promising path, it will be immediately removed from the run queue and executed. If some other path is more promising, that path will get the processor.


	%\begin{tabular}
	The method run() of Knapsack4 .
\begin{verbatim}
public void run(){
    if (p+rw*item[i].profitPerWeight<bestProfit) {
        tg.terminate();
        searchFactory.recycle(this);
        return;
    }
    if (i>=LEVELS) {
        dfs(i,rw,p);
        tg.terminate();
        searchFactory.recycle(this);
        return;
    }
    if (rw - item[i].weight >= 0) {
        // first, start zero's subtree
        b.clear(i);
        prq.run(searchFactory.make(i+1,rw,p,
                    (BitSet)b.clone(),tg.fork()),
                p+rw*item[i+1].profitPerWeight);
        // then search the one's subtree
        b.set(i);
        rw-=item[i].weight;
        p+=item[i].profit;
        ++i;
    } else { //just search zero subtree
        b.clear(i);
        ++i;
    }
    prq.run(this,p+rw*item[i].profitPerWeight);
    return;
}
\end{verbatim}

	It is still an impure branch-and-bound algorithm, however, since it switches to a depth-first search part of the way down the tree. The smaller a value of
\lstinline'DFSLEVELS'
you supply when you create the
\lstinline'Knapsack4'
object, the closer it is to a pure branch-and-bound algorithm.


\ref{ParSubs.xml#id(53326)}[MISSING HREF]
depicts the execution of
\lstinline'Knapsack4'
. The dotted lines in the figure indicate that the computation at the head of the line is submitted to the priority run queue for later execution.


\includegraphics{figures/ParSubs-12}

\section{Chapter Wrap-up}
In this chapter, we explored executing subroutines in parallel to gain parallelism. Throughout the chapter, we had to consider such issues as how to design or reorganize the algorithms for parallel execution, how to maintain a good grain size, and how to synchronize threads performing the calculations.

Although we could have created a new thread for each subroutine, we used the
\lstinline'RunQueue'
class to allocate threads. Instead of creating a new thread for each subroutine, a
\lstinline'Runnable'
object for the subroutine is placed in the run queue. When threads are finished running one runnable from the run queue, they recycle themselves and wait at the run queue for another runnable to be enqueued, which they remove and run.


\lstinline'RunQueue'
objects have parameters that control how many threads can be created to handle the runnables placed in the queue. This allows throttling the computation. Embarrassingly parallel computations that create a flood of subcomputations will not swamp the system with threads if the run queue restricts the number of threads that can be created. However, we must take care not to create a deadlock, which could happen if some of the runnables wait on calculations that later runnables in the run queue perform. Then later runnables would not be able to run, because the earlier runnables would be holding all the threads.

We presented the
\lstinline'Accumulator'
and
\lstinline'SharedTerminationGroup'
classes to detect the termination of the subcalculations. The
\lstinline'Accumulator'
class allows the results of the subcalculations to be combined by associative, commutative operators (e.g., be added up). The
\lstinline'SharedTerminationGroup'
does not help the subcalculations combine their results, but neither does it restrict the number of subcalculations to a fixed number. New members can be added to the group while it is running. Other threads can wait for all the computations in the group to terminate.

We looked at two techniques for designing algorithms with parallel subroutines: divide and conquer and branch and bound. Branch and bound is used for combinatorial search. It gives precedence to the most promising search paths. For branch-and-bound algorithms, we introduced the
\lstinline'PriorityRunQueue'
class, which gives the highest-priority runnables to threads before the lower-priority ones. For this to be useful, the number of threads that the priority run queue can create must be limited.

Although we do not make much use of it in this book, factories with recycling, such as the
\lstinline'SearchFactory'
class, can significantly improve performance of systems that place runnables in run queues. When a runnable is about to terminate, it is clear that it will no longer be needed, so it is safe to recycle it. Just as run queues save us the expense of creating threads, factories with recycling can further save us the expense of allocating and garbage collecting runnables.


\section{Exercises}
1. Try using a parallel depth-first search and a branch-and-bound search for the following two-processor scheduling problem:

There are
\emph{N}
jobs that are to be scheduled. The jobs can run on either of two unique processors, so the time a job takes on processor 1 will not necessarily be the same as on processor 2. Indeed, one job may take longer on processor 1; another may take longer on processor 2. Jobs may not be preempted or moved to another processor before completion. Once started on a processor, a job will run to completion.

You are given the times that each job will take on each processor ($t_{ij}$, where $i=1,2$ is the processor number and $j=1,\ldots,N$ is the job number).

A schedule for the jobs will assign each job to one of the processors. The completion time on a processor is the sum of the times for all the jobs scheduled on it. The completion time for the entire schedule is the maximum of the completion times for both processors.
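As a starting point for either search, the objective function above can be computed directly. This sketch (our own illustrative helper; the exercise leaves the schedule representation up to you) evaluates the completion time of one candidate assignment:

```java
public class Schedule {
    // times[i][j]: time job j takes on processor i (i = 0 or 1 here).
    // onProc2.get(j) means job j is assigned to the second processor.
    // Returns the completion time of the whole schedule: the larger of
    // the two processors' total loads.
    static int completionTime(int[][] times, java.util.BitSet onProc2) {
        int t1 = 0, t2 = 0;
        for (int j = 0; j < times[0].length; j++) {
            if (onProc2.get(j)) t2 += times[1][j];
            else t1 += times[0][j];
        }
        return Math.max(t1, t2);
    }
}
```

A depth-first or branch-and-bound search would then enumerate assignments (one bit per job, as in the knapsack examples) and minimize this value.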

Do a parallel depth-first search or a branch-and-bound algorithm to find the schedule that achieves the minimum completion time for the entire schedule.

2. Implement a mergesort. A mergesort works by partitioning the array to be sorted into two subarrays, recursively sorting them and merging the results into a single sorted array. A merge of two sorted arrays
\emph{A}
and
\emph{B}
, where
\emph{A}
is of length
\emph{M}
and
\emph{B}
is of length
\emph{N}
, copies the contents of
\emph{A}
and
\emph{B}
into the
\emph{M}
+
\emph{N}
element array
\emph{C}
. The merge repeatedly moves the smallest remaining element in either array
\emph{A}
or array
\emph{B}
into the next position of array
\emph{C}
. While elements remain in both
\emph{A}
and
\emph{B}
, the merge will have to compare two elements, the next elements in each, to see which is the smallest. As soon as one of the arrays is exhausted, all the remaining elements of the other may be copied without examination.

3. Write a parallel directory-search method. Given a directory
\lstinline'File'
object and a
\lstinline'String'
object, it will search that directory and all subdirectories recursively for all files whose names contain the specified string. Have it search the subdirectories concurrently.


