\chapter{Shared Tables of Queues}

\begin{itemize}

\item{Shared Tables of Queues}

\item{Implementing Synchronizations Using a Shared Table of Queues}

\item{Indexed Keys}

\item{Implementing More Synchronizations and Shared Structures}

\item{Communicating through a Shared Table of Queues}

\item{Future Queues}

\item{Future Tables}

\end{itemize}
The
\texttt{SharedTableOfQueues}
class provides an associative data structure that facilitates coordination and communication among a group of threads and chores. It was inspired by the Linda project, whose tuple space is used to provide shared directories and queues. JavaSpaces is another knock-off of Linda.
\par
Shared tables of queues are a shared-memory data structure. When we show how to encode other synchronization operations using them, you may wonder why we bother, since we have other, more specialized and more efficient classes for the same purposes. One reason is that we will show a distributed-memory system based on a shared table of queues. (See the section entitled ``
\ref{../../gkt/F/coordination.xml#id(63067)}[MISSING HREF]
.'') Many of the algorithms we show will work the same way in the distributed memory version.
\par

\section{Shared Tables of Queues}
The class
\texttt{SharedTableOfQueues}
combines directories and queues. Since both directories and queues are long known to be useful, it is no wonder that the combined data structure is versatile and convenient.
\par

\subsubsection{Methods}
A shared table of queues is like a hash table, but values put into the table are queued, rather than replacing the previous value.
\par

\textbf{\texttt{SharedTableOfQueues}}
	\begin{verbatim}
public SharedTableOfQueues()

public void put(Object key, Object value)
public Object get(Object key)
        throws InterruptedException
public Object look(Object key)
        throws InterruptedException
public boolean isEmpty(Object key)
public Object getFuture(Object key)
public Object lookFuture(Object key)
public Object getSkip(Object key)
public Object lookSkip(Object key)
public void runDelayed(Object key, Runnable r)
	\end{verbatim}
	Queues are automatically created in the table when a value is put in. The names of the queues can be any object, just as in hash tables. The queues are also automatically created when a thread or chore waits for a value to be placed in it. Only objects, or null, can be placed in the queue, not primitive types. Placing null in the queue can cause some confusion, as we describe for methods
\texttt{getSkip()}
and
\texttt{lookSkip()}
.
\par
When there are no more items in the queue and no threads or chores are waiting for a value to be placed in the queue, the queue is removed, since if it is needed later, it will be created in exactly the state it was in when it was removed.
\par
The
\texttt{get(key)}
and
\texttt{look(key)}
methods are used to look up values in the queues. The
\texttt{get()}
method removes and returns the first value in the queue. The
\texttt{look()}
method returns a reference to the first value without removing it. Both
\texttt{look()}
and
\texttt{get()}
are blocking: They will wait for the queue to be nonempty before returning.
\par
Methods
\texttt{getSkip()}
and
\texttt{lookSkip()}
are versions of
\texttt{get}
and
\texttt{look}
that return immediately, yielding the value they found if they were successful or null if the queue was empty.
\par
Methods
\texttt{getFuture()}
and
\texttt{lookFuture()}
do not return the values in the queue, but rather futures containing those values. If the value is already present, the future will be returned with the value already assigned to it. If the value is not already present, the future will initially be unassigned, but will be assigned the value when it is put in the queue later.
\par
Finally, there is a
\texttt{runDelayed()}
method that leaves a runnable object to be scheduled when the queue becomes nonempty.
\par
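A quick demonstration may make these semantics concrete. The class \texttt{MiniStoq} below is a minimal stand-in written for this illustration--it is not the book's \texttt{SharedTableOfQueues} implementation--supporting only the operations the demonstration uses.
\par

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// A minimal stand-in for SharedTableOfQueues (not the book's code),
// implementing only the operations this demonstration needs.
class MiniStoq {
    private final Map<Object, LinkedList<Object>> table = new HashMap<>();

    public synchronized void put(Object key, Object value) {
        table.computeIfAbsent(key, k -> new LinkedList<>()).add(value);
        notifyAll();                        // wake threads blocked in get()/look()
    }

    public synchronized Object get(Object key) throws InterruptedException {
        while (isEmpty(key)) wait();        // block until the queue is nonempty
        LinkedList<Object> q = table.get(key);
        Object v = q.removeFirst();
        if (q.isEmpty()) table.remove(key); // discard empty queues
        return v;
    }

    public synchronized Object look(Object key) throws InterruptedException {
        while (isEmpty(key)) wait();
        return table.get(key).getFirst();   // do not remove the value
    }

    public synchronized Object getSkip(Object key) {
        return isEmpty(key) ? null : table.get(key).removeFirst();
    }

    public synchronized boolean isEmpty(Object key) {
        LinkedList<Object> q = table.get(key);
        return q == null || q.isEmpty();
    }
}

public class StoqDemo {
    public static void main(String[] args) throws InterruptedException {
        MiniStoq stoq = new MiniStoq();
        stoq.put("jobs", "first");
        stoq.put("jobs", "second");
        System.out.println(stoq.look("jobs"));    // first value, not removed
        System.out.println(stoq.get("jobs"));     // first value, removed
        System.out.println(stoq.get("jobs"));     // second value
        // The null confusion: getSkip() on an empty queue returns null,
        // which is indistinguishable from a queued null value.
        stoq.put("jobs", null);
        System.out.println(stoq.getSkip("jobs")); // a queued null...
        System.out.println(stoq.getSkip("jobs")); // ...or an empty queue?
    }
}
```

Note how \texttt{look()} leaves the value in place while \texttt{get()} removes it, and how, once a null has been queued, the null returned by \texttt{getSkip()} is ambiguous.
\par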
Here are the methods in more detail:
\par

\subsection{Constructor}
There is a single parameterless constructor,
\texttt{SharedTableOfQueues()}
.
\par

\subsection{put()}
Method
\texttt{put(Object key, Object value)}
puts an object or null
\texttt{value}
into the queue with name
\texttt{key}
. It creates a queue with name
\texttt{key}
if one is not already present.
\par

\subsection{get()}
Method
\texttt{get(Object key)}
removes and returns a reference from the queue named
\texttt{key}
. If the queue is empty,
\texttt{get()}
waits for something to be put into the queue. It throws
\texttt{InterruptedException}
if the thread was interrupted while waiting.
\par

\subsection{look()}
Method
\texttt{look(Object key)}
returns a reference to the first object in the queue named
\texttt{key}
. If the first item in the queue is null, of course, look returns that. If the queue is empty,
\texttt{look()}
waits for something to be put into the queue. The object is not removed from the queue. It throws
\texttt{InterruptedException}
if the thread was interrupted while waiting.
\par

\subsection{isEmpty()}
Method
\texttt{isEmpty(Object key)}
returns true if there is no queue named
\texttt{key}
in the shared table of queues, or if such a queue exists but has no items in it.
\par

\subsection{getFuture()}
Method
\texttt{getFuture(Object key)}
immediately returns a future. The future will contain the first object (or null) in the queue. If the queue is empty,
\texttt{getFuture()}
will return the future anyway, and the value will be assigned to that future when some value is enqueued later. Because this is a
\emph{get}
method, the value in the future is removed from the queue.
\par

\subsection{lookFuture()}
Method
\texttt{lookFuture(Object key)}
returns the future that is to contain the first value in the queue. The future is returned immediately. If the queue isn't empty, the future will have the value already in it. If the queue is empty, the future will be assigned when some value is placed in the queue later.
\par

\subsection{getSkip()}
Method
\texttt{getSkip(Object key)}
removes and returns the first value from the queue named
\texttt{key}
. If the queue is empty, it returns null. In any event, it returns immediately.
\par

\subsection{lookSkip()}
Method
\texttt{lookSkip(Object key)}
returns the first value in the queue named
\texttt{key}
. If the queue is empty, it returns null immediately. The value is not removed from the queue.
\par

\subsection{runDelayed()}
Method
\texttt{runDelayed(Object key, Runnable r)}
places the runnable object
\texttt{r}
in a run queue as soon as the queue named
\texttt{key}
is nonempty. Method
\texttt{runDelayed()}
returns immediately.
\par

\section{Implementing Synchronizations Using a Shared Table of Queues}
We will show in this section how shared tables of queues can be used to implement a wide variety of concurrent programming patterns: locked objects, semaphores, queues, bounded buffers, reactive objects, barriers, and
\emph{I}
-structures. The number of uses is not surprising; both directories (tables) and queues are widely used in programming systems.
\par

\subsection{Named Futures}
A shared table of queues allows you to create named futures. The important instruction to use is
\par

\texttt{Future f=(Future)stoq.lookFuture(name)}
where name can be any object. All threads that look up the future with the same name will get a reference to the same future. The future can be assigned a value
\texttt{v}
directly, by calling its
\texttt{setValue()}
method, as in
\par

\texttt{f.setValue(v)}
or indirectly by calling
\par

\texttt{stoq.put(name,v)}
There are three major dangers in using a shared table of queues to get named futures:
\par

\begin{enumerate}

\item{Be sure that you call \texttt{lookFuture()} rather than \texttt{getFuture()}. Method \texttt{getFuture()} returns a different future each call.}

\item{Be sure, of course, that you are not using the same name in the table for any other purpose.}

\item{If the futures are not removed from the table and if the whole table is not discarded, the futures will not be garbage collected.}

\end{enumerate}
As an alternative, you could just use the queue itself as a future with the following equivalences:
\par

\begin{tabular}{ll}
\texttt{f.getValue()} & \texttt{stoq.look(n)} \\
\texttt{f.setValue(v)} & \texttt{stoq.put(n,v)} \\
\texttt{f.isSet()} & \texttt{!stoq.isEmpty(n)}
\end{tabular}
	Here,
\texttt{f}
is a future and
\texttt{n}
is the name of the simulated future.
\par
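These equivalences can be exercised directly. The sketch below uses a minimal stand-in class (\texttt{Stoq}, written for this illustration, not the book's implementation) to simulate a named future with a queue: a single \texttt{put} plays the role of \texttt{setValue()}.
\par

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Minimal stand-in for SharedTableOfQueues (not the book's code).
class Stoq {
    private final Map<Object, LinkedList<Object>> t = new HashMap<>();
    public synchronized void put(Object k, Object v) {
        t.computeIfAbsent(k, x -> new LinkedList<>()).add(v);
        notifyAll();
    }
    public synchronized Object look(Object k) throws InterruptedException {
        while (isEmpty(k)) wait();
        return t.get(k).getFirst();       // look() never removes the value
    }
    public synchronized boolean isEmpty(Object k) {
        LinkedList<Object> q = t.get(k);
        return q == null || q.isEmpty();
    }
}

// The queue named "result" acts as a future: put(n,v) is setValue(v),
// look(n) is getValue(), and !isEmpty(n) is isSet().
public class QueueAsFuture {
    public static void main(String[] args) throws InterruptedException {
        final Stoq stoq = new Stoq();
        final Object name = "result";
        System.out.println(!stoq.isEmpty(name)); // false: not yet "set"
        // Another thread assigns the simulated future exactly once.
        new Thread(() -> stoq.put(name, 42)).start();
        System.out.println(stoq.look(name));     // blocks, then prints 42
        System.out.println(!stoq.isEmpty(name)); // true: now "set"
    }
}
```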
If all you want are named futures, you may prefer to use the
\texttt{FutureTable}
class, rather than using a shared table of queues.
\texttt{FutureTable}
is presented in the section entitled ``
\ref{id(56570)}[MISSING HREF]
.''
\par

\subsection{Named, Locked Records}
There are several ways to use a shared table of queues to share named, locked records among threads.
\par

\textbf{Technique 1.}
The easiest way is to use the shared table of queues as a directory. Simply place the shared record in the table with
\par

\texttt{stoq.put(name,record)}
When threads need to use it, they call
\par

\texttt{Record rec=(Record)stoq.look(name);}

\texttt{synchronized (rec) \{ /* operate on record */ \}}

\textbf{Technique 2.}
A way to lock records that is more in the spirit of the shared table of queues is
\par

\texttt{Record rec=(Record)stoq.get(name);}

\texttt{/* operate on record */}

\texttt{stoq.put(name,rec);}
This second form removes the record from the table while it is being operated on. Other threads have to wait at their
\texttt{get()}
calls until the record is replaced before they can access it.
\par
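Here is a sketch of Technique 2 in action, using a minimal stand-in for the shared table of queues (the classes \texttt{Stoq} and \texttt{Record} are illustrative, not from the book's library). Because \texttt{get()} removes the record, the two incrementing threads never operate on it at the same time.
\par

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Minimal stand-in for SharedTableOfQueues (not the book's code).
class Stoq {
    private final Map<Object, LinkedList<Object>> t = new HashMap<>();
    public synchronized void put(Object k, Object v) {
        t.computeIfAbsent(k, x -> new LinkedList<>()).add(v);
        notifyAll();
    }
    public synchronized Object get(Object k) throws InterruptedException {
        while (t.get(k) == null || t.get(k).isEmpty()) wait();
        return t.get(k).removeFirst();
    }
}

class Record { int count = 0; }

// Technique 2: get() removes the record, so while one thread holds it,
// every other thread blocks in get() -- mutual exclusion for free.
public class LockedRecordDemo {
    public static void main(String[] args) throws InterruptedException {
        final Stoq stoq = new Stoq();
        final String name = "shared record";
        stoq.put(name, new Record());
        Runnable worker = () -> {
            try {
                for (int i = 0; i < 1000; i++) {
                    Record rec = (Record) stoq.get(name); // lock: remove it
                    rec.count++;                          // operate on record
                    stoq.put(name, rec);                  // unlock: replace it
                }
            } catch (InterruptedException e) {}
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(((Record) stoq.get(name)).count); // prints 2000
    }
}
```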

\subsection{Named Locks}
Technique 2 for implementing locked records can also be used for named locks. Simply keep one object in the shared table under the name. As long as the object is in the table, the lock is unlocked. To lock the name, get the object from the table. To unlock it, put an object back in.
\par
To initialize the lock, use
\par

\texttt{stoq.put(name,"X");}
To lock the name, use
\par

\texttt{stoq.get(name);}
To unlock the name, use
\par

\texttt{stoq.put(name,"X");}

\subsection{Named Semaphores}
Named semaphores can work the same way as locks, which are simply binary semaphores. Initialization and the down and up operations are simply coded.
\par
To initialize the semaphore to a count
\emph{n,}
use
\par

\texttt{for(int initCount=n;initCount>0;initCount--)}

\texttt{stoq.put(name,"X");}
To down the semaphore, use
\par

\texttt{stoq.get(name);}
To up the semaphore, use
\par

\texttt{stoq.put(name,"X");}
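The same pattern can be wrapped in a small class. \texttt{TableSemaphore} and the stand-in \texttt{Stoq} below are illustrative names, not part of the book's library; the tokens queued under the name carry the semaphore's count.
\par

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Minimal stand-in for SharedTableOfQueues (not the book's code).
class Stoq {
    private final Map<Object, LinkedList<Object>> t = new HashMap<>();
    public synchronized void put(Object k, Object v) {
        t.computeIfAbsent(k, x -> new LinkedList<>()).add(v);
        notifyAll();
    }
    public synchronized Object get(Object k) throws InterruptedException {
        while (isEmpty(k)) wait();
        return t.get(k).removeFirst();
    }
    public synchronized boolean isEmpty(Object k) {
        LinkedList<Object> q = t.get(k);
        return q == null || q.isEmpty();
    }
}

// A named counting semaphore: each queued token is one unit of the count.
class TableSemaphore {
    private final Stoq stoq;
    private final Object name;
    TableSemaphore(Stoq stoq, Object name, int n) {
        this.stoq = stoq;
        this.name = name;
        for (int i = n; i > 0; i--) stoq.put(name, "X"); // initialize count
    }
    void down() throws InterruptedException { stoq.get(name); }
    void up() { stoq.put(name, "X"); }
}

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Stoq stoq = new Stoq();
        // A semaphore with count 2; initializing with one token gives a lock.
        TableSemaphore sem = new TableSemaphore(stoq, "permits", 2);
        sem.down();   // two tokens queued: returns at once
        sem.down();   // one token left: returns at once
        sem.up();     // put a token back
        sem.down();   // returns at once again
        System.out.println(stoq.isEmpty("permits")); // true: count is zero
    }
}
```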

\subsection{Named Queues}
It is almost too obvious to state, but the following are queue operations and their equivalents.
\par

\begin{tabular}{ll}
\texttt{q.get()} & \texttt{stoq.get(n)} \\
\texttt{q.put(v)} & \texttt{stoq.put(n,v)} \\
\texttt{q.isEmpty()} & \texttt{stoq.isEmpty(n)}
\end{tabular}
	Here,
\texttt{q}
is a queue and
\texttt{n}
is the name of a simulated queue.
\par
On a concurrent system, queues of this sort are dangerous. The producer can run arbitrarily far ahead of the consumer, exhausting the memory. It is better to use bounded buffers, which we will show below.
\par

\section{Indexed Keys}
Although any object can be used as a key in a shared table of queues, the
\texttt{IndexedKey}
class provides convenient keys for some of our purposes. An indexed key has two fields:
\texttt{id}
and
\texttt{x}
. Field
\texttt{id}
is in effect a name, and
\texttt{x}
is an index. Keys with the same
\texttt{id}
can be considered parts of the same data object, the parts of that object differing in their
\texttt{x}
field.
\par

\textbf{\texttt{IndexedKey}}
	\begin{verbatim}
private int id;
private long x;

public static IndexedKey make(int id, long x)
public static IndexedKey unique(long x)
public int getId()
public long getX()
public IndexedKey at(long x)
public IndexedKey add(long x)
public boolean equals(Object o)
public int hashCode()
public String toString()
	\end{verbatim}
	The methods of
\texttt{IndexedKey}
are shown in the box.
\texttt{IndexedKey}
has no public constructors, but it does provide a number of factory methods. Static method
\texttt{make(id,x)}
creates an indexed key with the specified
\texttt{id}
and
\texttt{x}
fields.
\par
Static method
\texttt{unique(x)}
creates an indexed key with a unique
\texttt{id}
field and the specified
\texttt{x}
field. The
\texttt{id}
fields are generated by a random-number generator and will not repeat unless you rapidly generate unique indexed keys for a long time. Moreover, the
\texttt{id}
fields generated will not lie in the range of values for short integers. If you restrict the
\texttt{id}
fields you pass to
\texttt{make()}
to shorts, unique keys will not collide with them either.
\par
Calling
\texttt{k.at(x)}
creates another
\texttt{IndexedKey}
object with the same
\texttt{id}
as
\texttt{k}
, but with its
\texttt{x}
field replaced. It is equivalent to
\texttt{IndexedKey.make(k.getId(),x)}
. Calling
\texttt{k.add(x)}
adds to the
\texttt{x}
field. It is equivalent to
\texttt{IndexedKey.make(k.getId(), k.getX() + x)}
.
\par
There is no way to change the contents of an indexed key. Since indexed keys are designed to be used as keys in tables, the ability to change them would not be safe. The
\texttt{id}
and
\texttt{x}
fields are hashed to place the key in the hash table. If the fields change, the key couldn't be found again.
\par
Two indexed keys with equal
\texttt{id}
and
\texttt{x}
fields are reported equal by
\texttt{equals()}
. Method
\texttt{hashCode()}
, of course, is needed to look up the indexed keys in the hash table that a shared table of queues uses.
\par
The code shown in
\ref{id(43340)}[MISSING HREF]
has several points of interest.
\par

	Class IndexedKey
	\begin{verbatim}
import java.util.Random;

class IndexedKey {
    private static Random rand = new Random();
    private static Random hasher = new Random();
    private int id;
    private long x;

    private IndexedKey(long x) {
        synchronized (rand) {
            // draw ids until one falls outside the range of short
            for (id = rand.nextInt();
                 Short.MIN_VALUE <= id && id <= Short.MAX_VALUE;
                 id = rand.nextInt());
        }
        this.x = x;
    }

    private IndexedKey(int id, long x) { this.id = id; this.x = x; }

    public static IndexedKey unique(long x) {
        return new IndexedKey(x);
    }

    public static IndexedKey make(int id, long x) {
        return new IndexedKey(id, x);
    }

    public int getId() { return id; }

    public long getX() { return x; }

    public IndexedKey at(long x) {
        return new IndexedKey(id, x);
    }

    public IndexedKey add(long x) {
        return new IndexedKey(id, this.x + x);
    }

    public boolean equals(Object o) {
        if (o instanceof IndexedKey) {
            IndexedKey k = (IndexedKey) o;
            return id == k.id && x == k.x;
        } else return false;
    }

    public int hashCode() {
        synchronized (hasher) {
            hasher.setSeed(id + x);
            hasher.nextInt();          // skip the first draw
            return hasher.nextInt();
        }
    }

    public String toString() {
        return "IndexedKey(" + id + "," + x + ")";
    }
}
	\end{verbatim}
	The implementation of
\texttt{IndexedKey}
uses two random-number generators from class java.util.Random. The generator
\texttt{rand}
is used to generate the unique symbols. The generator
\texttt{hasher}
is used to generate hash codes for indexed keys. Since these are static and must be shared among all indexed keys, they must be locked while in use.
The generator \texttt{hasher}
is used to hash the
\texttt{id}
and
\texttt{x}
fields as follows: The sum of the
\texttt{id}
and
\texttt{x}
fields is fed into
\texttt{hasher}
as a new seed; then, the second random integer from the series is used as the hash value. Actually, the first random number from the series ought to be sufficiently random, but visual inspection led us to suspect that it was not.
\par

\section{Implementing More Synchronizations and Shared Structures}
Now we will consider some more shared data structures and synchronization objects built on a shared table of queues that use
\texttt{IndexedKey}
in their implementation.
\par

\subsection{Bounded buffers}
Since a shared table of queues contains queues, half of a bounded buffer is already provided. The only problem is to restrict the number of items that can be placed in a queue. For this, we can use another queue containing arbitrary tokens that represent available slots in the queue of items. The code is shown in
\ref{id(10337)}[MISSING HREF]
. We use two indexed keys as names of the queues. These keys were generated by
\texttt{IndexedKey.unique()}
. They differ from each other in the
\texttt{x}
field. The name
\texttt{fulls}
is used to access the queue of values. The queue named
\texttt{empties}
holds arbitrary tokens--actually, it holds string objects ``X''. To put something into the queue, a thread must first remove a token from the
\texttt{empties}
queue, which indicates that it has acquired an empty slot in the queue. When a thread gets a value out of the queue, it puts a token back into
\texttt{empties}
, allowing another item to be placed in the queue.
\par
You might consider this to be a combination of a queue and a semaphore.
\par

	Bounded buffer using a shared table of queues
	\begin{verbatim}
class BBuffer {
    private IndexedKey fulls = IndexedKey.unique(0);
    private IndexedKey empties = fulls.at(1);
    private SharedTableOfQueues stoq = new SharedTableOfQueues();

    public BBuffer(int num) {
        for (int i = num; i > 0; i--) stoq.put(empties, "X");
    }

    public void put(Object x) {
        try {
            stoq.get(empties);      // acquire an empty slot
            stoq.put(fulls, x);
        } catch (InterruptedException e) {}
    }

    public Object get() {
        Object x = null;
        try {
            x = stoq.get(fulls);
            stoq.put(empties, "X"); // release the slot
        } catch (InterruptedException e) {}
        return x;
    }
}
	\end{verbatim}
	
\subsection{I-structures}
One of the big conceptual problems in dataflow is dealing with structured objects--arrays and records. One conceives of objects flowing among instructions in tokens, but arrays are too large to move around. Dataflow machines have had to resort to structure stores to hold structured objects, but these must be made compatible with the dataflow single-assignment principle.
\par
The researcher Arvind named these structures in dataflow machines I-structures, or incremental structures. The structures are not present all at once. The components are assigned values over time, and each component is assigned at most once. The structure grows incrementally. Attempts to fetch a component of an
\emph{I}
-structure must wait until that component has been assigned a value.
\par
For macrodataflow, an
\emph{I}
-structure can be composed of futures. With shared tables of queues and indexed keys, an
\emph{I}
-structure can be an associative array of futures.
\par
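As a sketch of the idea, the following class models an I-structure directly with a synchronized map rather than with a shared table of queues and futures; the name \texttt{IStructure} and its methods are illustrative only. Components are assigned at most once, and fetches block until the component they want exists.
\par

```java
import java.util.HashMap;
import java.util.Map;

// An I-structure sketch: an associative array whose components are
// assigned at most once, and whose readers block until the component
// they want has been assigned.
class IStructure {
    private final Map<Long, Object> slots = new HashMap<>();

    // Assign component i; a second assignment violates single assignment.
    public synchronized void assign(long i, Object v) {
        if (slots.containsKey(i))
            throw new IllegalStateException("component " + i + " already assigned");
        slots.put(i, v);
        notifyAll();                 // wake readers waiting on any component
    }

    // Fetch component i, waiting until it has been assigned.
    public synchronized Object fetch(long i) throws InterruptedException {
        while (!slots.containsKey(i)) wait();
        return slots.get(i);
    }
}

public class IStructureDemo {
    public static void main(String[] args) throws InterruptedException {
        final IStructure a = new IStructure();
        // A producer assigns component 3; the fetch below waits for it.
        new Thread(() -> a.assign(3, "value of a[3]")).start();
        System.out.println(a.fetch(3)); // blocks until assigned
    }
}
```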

\subsection{Barriers}
An implementation of a barrier is shown in
\ref{id(89622)}[MISSING HREF]
. The idea is that a barrier has to keep track of both how many threads still have to gather at it and the total number of threads that will be gathering at it. As threads gather, they decrement the count of remaining threads. When the count goes to zero, it is reset to the total number of synchronizing threads. The threads must have mutual exclusion when manipulating the
\texttt{remaining}
count. All threads except the last to gather at a barrier must wait for the last thread to arrive.
\par
We have implemented this barrier so that threads have to register with it. When they call
\texttt{register()}
, they get handles on the barrier, which are of class
\texttt{BarrierTQ.Handle}
The intent of this is to show how the threads can synchronize without using any shared object other than the shared table of queues and the data it contains. This is a model for how distributed programs can communicate through a shared table of queues on a remote machine. We will show such an implementation,
\texttt{Memo}
, later, in the section entitled ``
\ref{../../gkt/F/coordination.xml#id(63067)}[MISSING HREF]
.''
\par

	Barrier using a shared table of queues
	\begin{verbatim}
class BarrierTQ {
    private IndexedKey initialKey = IndexedKey.unique(0);
    private SharedTableOfQueues stoq = new SharedTableOfQueues();
    private int stillToRegister;

    private class X {
        public int remaining, count;
        X(int c) { remaining = count = c; }
    }

    public BarrierTQ(int num) {
        stillToRegister = num;
        stoq.put(initialKey, new X(num));
    }

    public class Handle {
        private IndexedKey current = initialKey;

        public void gather() {
            try {
                X x = (X) stoq.get(current);
                x.remaining--;
                if (x.remaining == 0) {
                    x.remaining = x.count;
                    current = current.add(1);
                    stoq.put(current, x);  // release the waiting threads
                } else {
                    stoq.put(current, x);  // put the record back
                    current = current.add(1);
                    stoq.look(current);    // wait for the last thread
                }
            } catch (InterruptedException e) {}
        }
    }

    public Handle register() {
        if (stillToRegister-- > 0) return new Handle();
        else throw new IllegalStateException();
    }

    public String toString() {
        return "BarrierTQ(" + initialKey.getId() + ")";
    }
}
	\end{verbatim}
	We use a shared, named, locked record to keep the total count of threads and the remaining count. We remove it to examine and change the remaining count. All threads except the last replace it in the shared table of queues.
\par
The threads that must delay until the last one gathers delay by calling
\texttt{look()}
. The value they are looking for will be placed in the table by the last thread to arrive at the
\texttt{gather()}
.
\par
The tricky part of the implementation is to use the record holding the count and remaining fields as the item that the waiting threads look for to continue after a gather. This record will be moved through a series of queues. The
\texttt{x}
field of this key constitutes a step number. The record is placed in a queue at the beginning of a step. At the end of a step, it is removed from one queue and placed in the next.
\par
All the threads will use one value of their indexed key when they gather. They remove the record there, decrement the remaining count, and if the count isn't zero, they replace the record in the queue from which they retrieved it. Then, they look for a record to show up in the queue whose name is one larger in the index field. There is no record there already, so they wait. When the last thread gathers, it resets the remaining count, but does not replace the record in the queue it got it from. Instead, it places it in the queue named with an
\texttt{x}
field one larger than before. When the other threads see this record, they know that all threads have gathered, so they can proceed.
\par

\section{Reactive Objects}
Reactive objects, or Actors, provide a way of thinking about massively parallel computation. The idea is that each component of a data structure is an active, programmed entity. These objects can run in parallel, communicating by message passing. They can be used to implement data-parallel algorithms (i.e., algorithms that operate on all the components of data structures in parallel).
\par
A reactive object is an object that receives messages and responds to them. It is only active when it has messages to process; it reacts to messages. In several of the reactive object programming systems, the code for a reactive object resembles the code for a chore; it is entered at the beginning for each message it receives, like a chore being reentered at the beginning of its
\texttt{run()}
method.
\par
One of the inspirations for reactive objects was to program massively parallel computers composed of hundreds of thousands of tiny computers. Dozens of the tiny computers could be placed on the same chip. Although each of the tiny computers would have a slow processor and little memory, each of them could be running in parallel, and the very number of them would make computations run blazingly fast. It was envisioned that the programs on the tiny computers could be reactive objects.
\par
It turned out that the idea didn't work for a number of reasons, but as is the case with dataflow, building massive systems of reactive objects is a programming technique that can sometimes be useful.
\par
Each object in a reactive-object system has a mailbox for receiving messages and a program there that receives the messages and does the computing. The address, or name, of the object is the address of its mailbox as far as the system is concerned.
\par
In a pure reactive object system, the objects only communicate by sending messages. There is no direct way to call a method in another object. Rather, the caller sends a message and then receives a reply message.
\par
There are a number of ways to implement reactive-object systems using a shared table of queues. In all of them, the mailboxes are queues in the shared table. We will call the shared table
\texttt{mailboxes}
in the following examples. The name of the reactive object is the key that maps to its mailbox queue.
\par
1. Each reactive object can be a thread. The object itself is the key that maps to its mailbox.
\par
A reactive object waits for mail by executing
\par

\texttt{Object msg = mailboxes.get(this)}
That is, it looks itself up in the mailboxes to find its message queue.
\par
To send a reactive object
\texttt{ro}
a message, simply call
\par

\texttt{mailboxes.put(ro,message)}
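Here is a sketch of this first scheme: the reactive object is a thread that looks itself up in the mailboxes. The classes \texttt{Mailboxes} and \texttt{Echoer} are illustrative stand-ins, not the book's classes; messages here are two-element arrays carrying a reply-to key and a body.
\par

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Minimal stand-in for SharedTableOfQueues (not the book's code).
class Mailboxes {
    private final Map<Object, LinkedList<Object>> t = new HashMap<>();
    public synchronized void put(Object k, Object v) {
        t.computeIfAbsent(k, x -> new LinkedList<>()).add(v);
        notifyAll();
    }
    public synchronized Object get(Object k) throws InterruptedException {
        while (t.get(k) == null || t.get(k).isEmpty()) wait();
        return t.get(k).removeFirst();
    }
}

// A reactive object as a thread: it looks itself up in the mailboxes
// to receive a message, and replies by putting into the sender's mailbox.
class Echoer extends Thread {
    private final Mailboxes mailboxes;
    Echoer(Mailboxes mailboxes) { this.mailboxes = mailboxes; }
    public void run() {
        try {
            Object msg = mailboxes.get(this);     // wait for mail
            Object[] m = (Object[]) msg;          // {replyTo, body}
            mailboxes.put(m[0], "echo: " + m[1]); // send the reply
        } catch (InterruptedException e) {}
    }
}

public class ReactiveDemo {
    public static void main(String[] args) throws InterruptedException {
        Mailboxes mailboxes = new Mailboxes();
        Echoer ro = new Echoer(mailboxes);
        ro.start();
        Object myName = "main";                        // sender's mailbox key
        mailboxes.put(ro, new Object[]{myName, "hi"}); // send ro a message
        System.out.println(mailboxes.get(myName));     // prints echo: hi
    }
}
```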
2. Reactive objects may easily be decoupled from their names. You can use a
\texttt{String}
object to give a descriptive name to an object, or use an indexed key, which allows you to create unique new names whenever you wish. The object now needs to know its name explicitly, changing the message reception to
\par

\texttt{Object msg = mailboxes.get(myName)}
There is an advantage to using indexed keys for names. Reactive objects have been proposed for array processing, with each array element, row, or block, being a reactive object. In theory, you should be able to get large amounts of parallelism. All the components of an array could be sent messages telling them to negate themselves, take their absolute values, or send themselves to the corresponding blocks of another array to be added.
\par
But many reactive-object systems have referenced objects by pointer only. Each element of an array has its individual name, and knowing the name of one array element will not tell you the name of any other. That tends to mean that arrays have to be implemented as linked structures. A two-dimensional array could be implemented as a binary search tree with the two indices as the key. The nodes of the trees would themselves be reactive objects, so to send a message to one array element, you would send the message to the root of the tree, which would pass it to its left or right child, which would do the same until the message got to the correct node in the tree and the correct array element.
\par
Alternatively, a two-dimensional array could be implemented as doubly-linked lists of the elements in its rows and columns, with the array's address being the address of its upper left element. Again, to get a message to a particular array element, you would first send the message to a single element, the upper left element.
\par
It was very difficult to write array algorithms without flooding the entry node of one array or another with messages, causing a bottleneck. Actually, it is worse than a bottleneck for those machines with a massive number of small processors, as the nodes would run out of queue space and crash.
\par
With indexed keys, the elements of the array can be addressed directly, so there is less likelihood of flooding single reactive objects with messages.
\par
3. Reactive-object systems are often designed with the idea that there will be huge numbers of objects. It would be overly expensive if all of these objects were threads. It is desirable to implement them as chores.
\par
With chores, the code for receiving a message becomes something more like the following:
\par

	\begin{verbatim}
public void run() {
    Object msg = mailboxes.getSkip(myName);
    if (msg == null) { mailboxes.runDelayed(myName, this); return; }
    ...
}
	\end{verbatim}
Here it is assumed that messages are never null. As is usual with chores, and with reactive objects for that matter, the object receives a message at the start of its
\texttt{run()}
method.
\par
If we are going to write code in the style of reactive-object systems that only are dispatched when there is a message present, our code could be as follows:
\par

	\begin{verbatim}
public void run() {
    Object msg = mailboxes.get(myName);
    ...
    mailboxes.runDelayed(myName, this);
}
	\end{verbatim}
If the reactive object is run delayed on its message queue when it is first created, this code should work fine.
\par
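The chore-style reception above depends only on \texttt{getSkip()} and \texttt{runDelayed()}. As a rough sketch of what one queue of the shared table does for a chore, the \texttt{Mailbox} class below implements those two operations in standard Java; the names \texttt{Mailbox} and \texttt{demo()} are illustrative inventions, not part of the thread package.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// One queue of the table: getSkip() returns null when empty, and
// runDelayed() parks a chore to be re-run when a message arrives.
public class Mailbox {
    private final Deque<Object> q = new ArrayDeque<>();         // messages
    private final Deque<Runnable> delayed = new ArrayDeque<>(); // parked chores

    public synchronized Object getSkip() {
        return q.pollFirst();        // null if empty: don't enqueue nulls
    }

    public synchronized void runDelayed(Runnable r) {
        if (q.isEmpty()) delayed.addLast(r);  // park until a message arrives
        else r.run();                         // a message is already waiting
    }

    public void put(Object msg) {
        Runnable r;
        synchronized (this) {
            q.addLast(msg);
            r = delayed.pollFirst();
        }
        if (r != null) r.run();      // dispatch one parked chore
    }

    public static String demo() {
        final Mailbox mb = new Mailbox();
        final StringBuilder log = new StringBuilder();
        Runnable chore = new Runnable() {
            public void run() {
                Object msg = mb.getSkip();
                if (msg == null) { mb.runDelayed(this); return; }
                log.append(msg);     // process the message
            }
        };
        chore.run();   // no message yet: the chore parks itself
        mb.put("a");   // wakes the chore, which consumes "a"
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // prints a
    }
}
```

A production version would re-park the chore after each message; this sketch handles just one dispatch cycle to keep the state visible.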
4. An advantage of implementing reactive objects as chores communicating through shared tables of queues over traditional reactive-object designs is that our objects can have more than one message queue. Traditionally, reactive objects have only one queue, and they must process the messages in the queue in first-come, first-served order. However, object A may wish to use object B as a subroutine. So, suppose A is receiving requests for service and calls object B as a subroutine by sending B a message. Object B can only return by sending messages back to A. So object A receives a request, sends a subroutine call message to object B, and must receive the response from B before it can respond to the request. But, suppose the first object receives another request for service before it receives the response to its first subroutine call. It can't respond to its first call until it receives a later message, but it must handle messages FIFO, so it must start processing another request before responding to the previous one. This is typically handled by enqueueing the service requests in an external queue. The external queue is itself a linked list of reactive objects. This certainly complicates the coding.
\par
With multiple queues, object A can request a response from object B in a second message queue. It waits for the response in that queue before going back to the first queue to handle more requests.
\par
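The effect of that second queue can be sketched with standard \texttt{java.util.concurrent} queues standing in for the table's queues. In this hypothetical \texttt{TwoQueueDemo}, object A finishes each request, waiting for B's reply on a separate response queue, before taking up the next request:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoQueueDemo {
    // Object A handles each request completely before taking the next,
    // awaiting B's reply on a second, private response queue.
    public static List<String> serve(List<String> requests) {
        BlockingQueue<String> toB = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> responses = new ArrayBlockingQueue<>(10);

        Thread b = new Thread(() -> {   // "object B", the subroutine
            try {
                String msg;
                while (!(msg = toB.take()).equals("stop"))
                    responses.put("B(" + msg + ")");
            } catch (InterruptedException e) { }
        });
        b.start();

        List<String> answers = new ArrayList<>();
        try {
            for (String req : requests) {
                toB.put(req);                   // subroutine-call message to B
                answers.add(responses.take());  // wait on queue #2, not queue #1
            }
            toB.put("stop");
            b.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return answers;
    }

    public static void main(String[] args) {
        System.out.println(serve(List.of("r1", "r2")));
    }
}
```

Because A waits on the response queue rather than its request queue, no external queue of pending requests has to be built out of linked reactive objects.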

\section{Communicating through a Shared Table of Queues}
A shared table of queues makes it easy to set up communications between threads. Both threads can compute names individually and then communicate through the objects at those named slots in the table.
\par

	%\begin{tabular}
	Warshall's Algorithm.
	\begin{verbatim}
for (k=0; k<N; k++)
    for (i=0; i<N; i++)
        for (j=0; j<N; j++)
            A[i][j] = A[i][j] || (A[i][k] && A[k][j]);
	\end{verbatim}
	%\end{tabular}
	
	%\begin{tabular}
	Structure of Warshall's algorithm using a shared table of queues
	\begin{verbatim}
class WarshallTQ {
    int blkSize;

    public WarshallTQ(int blkSize){
        this.blkSize=blkSize;
    }

    private class Block extends Thread {
        Block(...){...}
        public void run(){...}
    }

    public void closure(boolean[][] a) {...}
}
	\end{verbatim}
	%\end{tabular}
	
	%\begin{tabular}
	Method closure()
	\begin{verbatim}
public void closure(boolean[][] a) {
    int i,j,NR,NC;
    SharedTableOfQueues tbl=new SharedTableOfQueues();
    IndexedKey kthRows=IndexedKey.unique(0);
    IndexedKey kthCols=IndexedKey.unique(0);
    NR=a.length;
    NC=a[0].length;
    int nt=((NR+blkSize-1)/blkSize)*((NC+blkSize-1)/blkSize);
    Accumulator done=new Accumulator(nt);
    for (i=0;i<NR;i+=blkSize)
        for (j=0;j<NC;j+=blkSize){
            new Block(a,i,j,tbl,
                      kthRows,kthCols,done).start();
        }
    try {
        done.getFuture().getValue();
    } catch (InterruptedException ex){}
}
	\end{verbatim}
	%\end{tabular}
	
	%\begin{tabular}
	Class Block
	\begin{verbatim}
private class Block extends Thread {
    boolean[][] a;
    boolean[][] block;
    int r,c; // upper left corner
    int nr,nc;
    int N;
    SharedTableOfQueues tbl;
    IndexedKey rows, cols;
    Accumulator done;

    Block( boolean[][] a,
           int r, int c,
           SharedTableOfQueues tbl,
           IndexedKey rows,
           IndexedKey cols,
           Accumulator done){
        this.a=a;
        this.r=r;
        this.c=c;
        N = a.length;
        this.tbl=tbl;
        this.rows=rows;
        this.cols=cols;
        this.done=done;
    }

    public void run(){ ... }
}
	\end{verbatim}
	%\end{tabular}
	
	%\begin{tabular}
	Method run() from class Block
	\begin{verbatim}
public void run(){
    int i,j;
    int k;
    boolean IHaveRow,IHaveColumn;
    boolean[] row=null, col=null;
    nr=Math.min(blkSize,a.length-r);
    nc=Math.min(blkSize,a[0].length-c);
    this.block=new boolean[nr][nc];
    for (i=0;i<nr;i++)
        for (j=0;j<nc;j++)
            block[i][j]=a[r+i][c+j];
    try {
        for (k=0;k<N;k++) {
            IHaveRow = k-r>=0 && k-r<nr;
            IHaveColumn = k-c>=0 && k-c<nc;
            if (IHaveRow) {
                tbl.put(rows.at(k+c*N),block[k-r].clone());
                row = block[k-r];
            }
            if (IHaveColumn) {
                col=new boolean[nr];
                for (j=0;j<nr;j++) col[j]=block[j][k-c];
                tbl.put(cols.at(k+r*N),col);
            }
            if (!IHaveRow) {
                row = (boolean[])tbl.look(rows.at(k+c*N));
            }
            if (!IHaveColumn) {
                col=(boolean[])tbl.look(cols.at(k+r*N));
            }
            for (i=0;i<nr;i++)
                if (col[i])
                    for (j=0;j<nc;j++)
                        block[i][j] |= row[j];
        } // end for k

        for (i=0;i<nr;i++)
            for (j=0;j<nc;j++)
                a[r+i][c+j]=block[i][j];
        done.signal();
    } catch (InterruptedException iex){}
}
	\end{verbatim}
	%\end{tabular}
	
\includegraphics{figures/stoq-1}

\includegraphics{figures/stoq-2}
As an example of this, we will implement Warshall's algorithm yet again.
\par
The overall structure of our implementation is shown in
\ref{id(41071)}[MISSING HREF]
. As you did with the other implementations, you first create a
\texttt{WarshallTQ}
object and then call its
\texttt{closure()}
method, passing it the array whose transitive closure is to be computed. In this case, the parameter you pass to the constructor is not the number of threads to use. This implementation of Warshall's algorithm divides the array into rectangular blocks. Most will be square, but the last blocks along a side of the array can have unequal sides. The parameter passed,
\texttt{blkSize}
, is the length of the side of a square block, so the blocks will be
$\mathit{blkSize} \times \mathit{blkSize}$
squares where possible.
\par
Each block is given its own thread, so the number of threads created for an
$N \times N$
array will be
$\lceil N/\mathit{blkSize} \rceil^2$
.
\par
We have repeated the code for Warshall's algorithm in the box. The outer
\texttt{k}
loop iterates over the steps of the algorithm. The two inner loops,
\texttt{i}
and
\texttt{j}
, select each element of the array to be updated. Each
\texttt{A[i][j]}
has
\texttt{A[i][k] \&\& A[k][j]}
disjoined with it.
\texttt{A[i][k]}
is the element at row
\texttt{i}
in the
\texttt{k}
th column, and
\texttt{A[k][j]}
is the element in the
\texttt{j}
th column of the
\texttt{k}
th row. The flow of information to
\texttt{A[i][j]}
is shown in
\ref{id(27321)}[MISSING HREF]
.
\par
In versions of Warshall's algorithm shown before in the section entitled, ``
\ref{../d/Chores.xml#id(22447)}[MISSING HREF]
,'' a chore was responsible for processing an entire row. If the chore had the
\texttt{k}
th row on the
\texttt{k}
th step of the algorithm, it had to communicate that row to the other chores.
\par
Since we now divide the array into rectangular blocks and give them to threads, no thread has an entire row or an entire column that it can pass to other threads. In
\ref{id(93732)}[MISSING HREF]
, we draw in the edges of blocks. The element that is to be updated is in block D. Block D needs values of the
\texttt{k}
th row, stored in block B, and the
\texttt{k}
th column, stored in Block C. The block that holds a portion of the
\texttt{k}
th row during the
\texttt{k}
th step of the algorithm must pass it to all the other blocks in its column, and the block holding a portion of the
\texttt{k}
th column must pass it to all the blocks in its row.
\par
The way a block will pass a portion of a row or column to the other interested threads is by putting it in a shared table of queues with the key indicating the step of the algorithm,
\texttt{k}
, and the position of the block in its column or row.
\par

\subsection{closure}
The
\texttt{closure()}
method, shown in
\ref{id(47111)}[MISSING HREF]
, sets up the computation. It is passed the array to be processed. It proceeds as follows:
\par

\begin{itemize}

\item{It creates a shared table of queues to use and two IndexedKey objects to be used in naming the data being passed.}

\item{It calculates the number of threads nt it will create and creates an Accumulator object for those threads to signal their completion in.}

\item{It loops to create Block threads to do the actual processing. Each Block is told the original array to process and the indices of the upper left corner element in its block.}

\item{It awaits completion of the threads.}

\end{itemize}
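The fork/join shape of \texttt{closure()} can be sketched in standard Java, with a \texttt{CountDownLatch} standing in for the book's \texttt{Accumulator} and the per-block work elided; \texttt{ClosureSketch}, and its returning the block count for inspection, are illustrative choices of ours, not the book's code:

```java
import java.util.concurrent.CountDownLatch;

public class ClosureSketch {
    static final int blkSize = 2;   // illustrative block size

    // Skeleton of closure(): start one worker thread per block, then await
    // completion; a CountDownLatch stands in for the Accumulator.
    public static int closure(boolean[][] a) {
        int NR = a.length, NC = a[0].length;
        int nt = ((NR + blkSize - 1) / blkSize) * ((NC + blkSize - 1) / blkSize);
        CountDownLatch done = new CountDownLatch(nt);
        for (int i = 0; i < NR; i += blkSize)
            for (int j = 0; j < NC; j += blkSize)
                new Thread(() -> {
                    // ... per-block work via the shared table would go here ...
                    done.countDown();          // like done.signal()
                }).start();
        try {
            done.await();                      // like done.getFuture().getValue()
        } catch (InterruptedException ex) { }
        return nt;                             // number of blocks started
    }

    public static void main(String[] args) {
        System.out.println(closure(new boolean[5][5]));  // 3*3 = 9 blocks
    }
}
```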

\subsection{Block}
The structure of the
\texttt{Block}
class is shown in
\ref{id(58206)}[MISSING HREF]
. The constructor copies its parameters into the corresponding fields of the instance. The fields have the following meanings:
\par

\begin{itemize}

\item{a is the entire array passed to closure .}

\item{block is an array containing the block this thread is to work on. The run() method copies block from a portion of a at the beginning of its execution and copies the updated block back at the end.}

\item{r and c are the indices of the upper left corner element of the block.}

\item{N is the number of rows in the array a. It is used as the upper bound for the loop incrementing k; that is, N is the number of steps in the algorithm.}

\item{nr and nc are the number of rows and columns, respectively, in block.}

\item{tbl is the shared table of queues used for communication.}

\item{rows and cols are the IndexedKey objects used to construct the keys used to insert and lookup the portions of rows and columns being communicated.}

\end{itemize}

\texttt{Block}
's
\texttt{run()}
method is shown in
\ref{id(59432)}[MISSING HREF]
. It begins by calculating the number of rows and columns in its block;
\texttt{blkSize}
is used unless fewer rows or columns than that remain. Then it creates its
\texttt{block}
array and copies a portion of array
\texttt{a}
into it.
\par
The main loop of the algorithm cycles through all
\texttt{k}
steps. Index
\texttt{k}
is an actual row and column index in the full array. However, indices
\texttt{i}
and
\texttt{j}
are indices into the local array
\texttt{block}
.
\par
The loop first figures out whether this block contains a portion of the
\texttt{k}
th row or
\texttt{k}
th column. If so, it makes them available by placing them in the shared table of queues. We'll look at the computation of the table index later.
\par
If the block doesn't contain a section of the
\texttt{k}
th row, it reads it from the table, waiting for the row to be placed there by the block that has it. Similarly, if the block doesn't contain the
\texttt{k}
th column, it looks it up in the table. Both rows and columns are stored as boolean arrays.
\par
To understand the construction of indices, consider the line
\par

	\begin{verbatim}
row = (boolean[])tbl.look(rows.at(k+c*N));
	\end{verbatim}
	We look up the
\texttt{k}
th row for our column by looking in the shared table of queues at an index we construct. We start off with
\texttt{IndexedKey}

\texttt{rows}
. The
\texttt{id}
field of
\texttt{rows}
will be in all the keys that name rows. We substitute a computed value for the
\texttt{x}
field of
\texttt{rows}
. This
\texttt{x}
field must indicate the step of the algorithm
\texttt{k}
and which column this piece of row
\texttt{k}
is in. If two blocks are in the same column, their upper left elements have the same column index. We use the value of
\texttt{c}
. Since we multiply
\texttt{c}
by
\texttt{N}
and
\texttt{k}
will always be less than
\texttt{N}
, we will have unique names for each portion of a row placed in the table.
\par
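A quick way to convince yourself that \texttt{k+c*N} never collides is to generate every key one column of blocks could use and count the distinct values. \texttt{KeyDemo} below is a hypothetical check of ours, not part of the thread package:

```java
import java.util.HashSet;
import java.util.Set;

public class KeyDemo {
    // The block whose upper-left column index is c publishes its piece of
    // row k under index k + c*N.  Because 0 <= k < N, distinct (k, c)
    // pairs always yield distinct indices.
    public static int distinctKeys(int N, int blkSize) {
        Set<Integer> keys = new HashSet<>();
        for (int c = 0; c < N; c += blkSize)      // block column positions
            for (int k = 0; k < N; k++)           // steps of the algorithm
                keys.add(k + c * N);
        return keys.size();
    }

    public static void main(String[] args) {
        // 100 steps times 10 block columns: all 1000 keys are distinct.
        System.out.println(distinctKeys(100, 10));
    }
}
```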

\subsection{Performance}
The performance of
\texttt{WarshallTQ}
is shown in
\ref{id(91977)}[MISSING HREF]
. Unlike the other versions of Warshall's algorithm, the series does not directly correspond to the number of threads used. Rather, the series corresponds to the size of the blocks the array is divided into. The number of threads will be the number of blocks,
$\lceil N/B \rceil^2$
, where
\emph{N}
is the number of rows and columns in the array and
\emph{B}
is the number of rows and columns in a block. The ceiling operator is used, of course, because we allocate threads to any remaining partial blocks at the ends of rows and columns. The number of threads thus varies with the array size along the series for any given block size.
\par

\includegraphics{figures/stoq-7}
The series with block size
\includegraphics{figures/stoq-8}
allocates only one thread for all the array sizes shown here. The
\includegraphics{figures/stoq-9}
block size allocates from four to 144 threads.
\par
In this case, a
\includegraphics{figures/stoq-10}
block size worked best. It provided parallelism without flooding the program with threads.
\par
Having introduced all the versions of Warshall's algorithm, we are now in a position to compare the best series of each. These results are shown in
\ref{id(17228)}[MISSING HREF]
. The winners are
\texttt{Warshall1}
with 2 threads and
\texttt{WarshallTQ}
with a
\includegraphics{figures/stoq-11}
block size.
\par

\includegraphics{figures/stoq-12}

\section{Future Queues}
The queues in a
\texttt{SharedTableOfQueues}
object are instances of the
\texttt{FutureQueue}
class. Although designed to facilitate the implementation of
\texttt{SharedTableOfQueues}
,
\texttt{FutureQueue}
objects have uses of their own.
\par

	%\begin{tabular}
	
\textbf{\texttt{FutureQueue}}
FutureQueue
	\begin{verbatim}
FutureQueue()
FutureQueue(FutureFactory f)
FutureQueue(RunQueue r)
Future get()
Object getSkip()
boolean isEmpty()
boolean isVacant()
Future look()
Object lookSkip()
void put(Object obj)
void runDelayed(Runnable r)
	\end{verbatim}
	%\end{tabular}
	The idea behind future queues is this: In a normal FIFO queue, items are placed in the queue by calls to
\texttt{put()}
and removed from the queue by calls to
\texttt{get()}
. If a thread tries to get an item from the queue and there are no items present, the thread is delayed until there is an item to return to it. In a
\texttt{FutureQueue}
object,
\texttt{get()}
immediately returns a
\texttt{Future}
object. The futures are filled in with the items put in the queue as they become available. If a thread calls
\texttt{get().getValue()}
, it will wait until the time an item is enqueued, the same as it would if it tried to get a value out of a normal FIFO queue.
\par
So why not use a normal queue? There are a couple of reasons for this. First, a future queue provides FIFO service whereas Java schedulers do not. Second, a future queue allows a thread to put in a reservation without blocking, which may be useful.
\par
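The non-blocking reservation behavior can be sketched in standard Java using \texttt{CompletableFuture} in place of the thread package's \texttt{Future}; \texttt{MiniFutureQueue} below is an illustrative reduction, not the actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.CompletableFuture;

// get() never blocks: it returns a future that put() fills in FIFO order.
public class MiniFutureQueue<T> {
    private final Deque<T> q = new ArrayDeque<>();                      // waiting items
    private final Deque<CompletableFuture<T>> qf = new ArrayDeque<>();  // waiting futures

    public synchronized CompletableFuture<T> get() {
        CompletableFuture<T> f = new CompletableFuture<>();
        if (!q.isEmpty()) f.complete(q.removeFirst());  // item already here
        else qf.addLast(f);                             // a reservation, no blocking
        return f;
    }

    public synchronized void put(T item) {
        if (!qf.isEmpty()) qf.removeFirst().complete(item);  // fill oldest reservation
        else q.addLast(item);
    }

    public static void main(String[] args) {
        MiniFutureQueue<String> fq = new MiniFutureQueue<>();
        CompletableFuture<String> x = fq.get();   // returns at once, queue empty
        fq.put("a");                              // fills x
        fq.put("b");
        System.out.println(x.join() + fq.get().join());  // prints ab
    }
}
```

Only one of \texttt{q} and \texttt{qf} is ever nonempty, mirroring the invariant described later for the real class.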

\subsubsection{Methods}
A
\texttt{FutureQueue}
object uses a
\texttt{FutureFactory}
object to generate the futures it uses. You can create a future queue specifying the future factory or a run queue to use in creating its own future factory. Or you can just let it use the
\texttt{Future}
class's run queue. If you are not using
\texttt{runDelayed()}
, you might as well use the default.
\par
A
\texttt{FutureQueue}
is a first-in first-out queue. Items are added to the queue by its
\texttt{put()}
method. Items are removed and returned by
\texttt{get}
(), but
\texttt{get()}
has two differences from conventional FIFO queues:
\par

\begin{enumerate}

\item{Method get() immediately returns a future that contains the item removed from the queue rather than the item itself.}

\item{The future returned by get() may have its value filled in later, when the item is enqueued.}

\end{enumerate}
So, it is not the thread calling
\texttt{get}
() that is enqueued to wait for an item to return, but the future returned to that thread that waits.
\par
It may help to understand a bit of the internal states of a
\texttt{FutureQueue}
object.
\par
A
\texttt{FutureQueue}
object is essentially a pair of queues. One contains objects (and null references), and the other contains futures. At most, one of those queues will have something in it at a time. When you get an item out of the future queue, you will be given a future that the next item in the queue of objects will be placed in.
\par
If you try to get an item out of the queue and there are items already waiting, you will be given the first in a future. If you try to get an item and there are no items there, you will be given a future, and the future will be placed in the queue of futures.
\par
When you put an item into the future queue, the
\texttt{put()}
method checks whether there are futures enqueued. If there are, the first future is removed, and the item is placed in it. If there are no futures waiting, the item is enqueued in the normal way.
\par
The following explains this in more detail:
\par

\subsection{get()}
The
\texttt{get()}
method returns a future. If there are items enqueued, the first enqueued item is removed and placed in the future. If there are no items enqueued, the future will be placed FIFO on the queue of futures.
\subsection{put()}
The
\texttt{put(obj)}
method puts the value of
\texttt{obj}
into the future queue. If there are futures waiting in the queue of futures, the first one is removed, and
\texttt{obj}
is placed in it. That future has already been given to some thread that executed
\texttt{get()}
or
\texttt{look()}
. If, on the other hand, there are no futures enqueued,
\texttt{obj}
is placed in a FIFO queue to wait for a
\texttt{get()}
.
\subsection{isEmpty()}
Method
\texttt{isEmpty()}
returns true if there are no objects or nulls queued up. It would still return true if there are futures queued up waiting for puts.
\subsection{isVacant()}
Method
\texttt{isVacant()}
returns true if
\texttt{isEmpty()}
returns true, there are no futures enqueued, and there are no runnables rundelayed on the queue. Method
\texttt{isVacant()}
is used by the shared table of queues to determine that a future queue can be removed from the table. A future queue is vacant precisely when it is in the state in which it would be newly created, so if it is found to be vacant, removed from the table, and later referred to again, it will be re-created in exactly the state it had when removed.
\subsection{getSkip()}
Method
\texttt{getSkip()}
returns the first object from the queue of objects, not a future. It removes the object it returns. If
\texttt{isEmpty()}
would return true,
\texttt{getSkip()}
immediately returns null, which means that you should not enqueue nulls if you intend to use
\texttt{getSkip()}
--you won't be able to distinguish the value null from the empty indication.
\subsection{look()}
Method
\texttt{look()}
returns a future. Its implementation is a bit ticklish. If there are objects enqueued, it returns the first object in the object queue in the future, without, of course, removing the object. If the queue of objects is empty, then the future must be saved for subsequent
\texttt{look()}
calls to return and for a
\texttt{put()}
to place a value in.
\subsection{lookSkip()}
Method
\texttt{lookSkip()}
will return the first object in the object queue without removing it if that queue isn't empty. If
\texttt{isEmpty()}
would return true,
\texttt{lookSkip()}
immediately returns null. This can, of course, cause trouble if you wish to enqueue nulls.
\subsection{runDelayed()}
Method
\texttt{runDelayed(r)}
places
\texttt{r}
in a run queue if and when the future queue isn't empty. It is equivalent to
\texttt{fq.look().runDelayed(r)}
.
\subsubsection{Implementation of FutureQueue}
The implementation of
\texttt{FutureQueue}
is intricate. The goal is to have it behave the same as a normal FIFO queue, except that
\texttt{get()}
and
\texttt{look()}
return immediately, and they return values in futures, rather than directly.
\par

\subsection{Desired behavior}
It will help to consider two cases.
\par
Suppose you have three puts
\par

\texttt{theQueue.put(``a''); theQueue.put(``b''); theQueue.put(``c'');}

\par
and three gets
\par

\texttt{x=theQueue.get(); y=theQueue.get(); z=theQueue.get();}

\par
It does not matter how the puts and gets are interspersed. At the end,
\texttt{x}
will hold a future containing string
\texttt{``a''}
;
\texttt{y}
,
\texttt{``b''}
; and
\texttt{z}
,
\texttt{``c''}
. All the puts could be done first, last, or interspersed among the gets.
\par
Secondly, calls to
\texttt{look()}
and calls to
\texttt{get()}
form groups that we will call look-get groups. If there are several
\texttt{look}
s in succession, they will return futures containing the same value. A
\texttt{get}
following those
\texttt{look}
s will also return that value, but after a
\texttt{get}
,
\texttt{look}
s will return a different value. This is clear if the queue already has an item in it. The
\texttt{look}
s return futures containing the first item in the queue without removing it. A
\texttt{get}
also returns it, but
\texttt{get}
removes it. We want
\texttt{look}
s and
\texttt{get}
s to have exactly the same behavior, even if the items haven't been enqueued yet.
\par
We will define a look-get group as the longest string of zero or more
\texttt{look()}
calls terminated by a
\texttt{get()}
call. These look-get groups may have other calls (e.g.
\texttt{put()}
) interspersed among them. Remember also that a look-get group may contain no
\texttt{look}
s and one
\texttt{get}
. All the operations in the look-get group return a future that will contain the same value.
\par
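Under those rules, all the calls in a look-get group share a single future, and it is the group's initial look that does the dequeuing. The hypothetical \texttt{LookGetQueue} below sketches this with \texttt{CompletableFuture}; it omits \texttt{getSkip()}, \texttt{lookSkip()}, and \texttt{runDelayed()} and is not the book's implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.CompletableFuture;

// Every look() in a look-get group, and the get() that ends the group,
// return the one future held in lf; the initial look does the dequeuing.
public class LookGetQueue<T> {
    private final Deque<T> q = new ArrayDeque<>();                      // items
    private final Deque<CompletableFuture<T>> qf = new ArrayDeque<>();  // futures
    private CompletableFuture<T> lf = null;     // current group's future

    public synchronized CompletableFuture<T> look() {
        if (lf == null) {                       // initial look of a group
            lf = new CompletableFuture<>();
            if (!q.isEmpty()) lf.complete(q.removeFirst());
            else qf.addLast(lf);                // a later put() will fill it
        }
        return lf;                              // later looks reuse the future
    }

    public synchronized CompletableFuture<T> get() {
        if (lf != null) {                       // terminal get of a group
            CompletableFuture<T> f = lf;
            lf = null;                          // next look starts a new group
            return f;
        }
        CompletableFuture<T> f = new CompletableFuture<>();  // group with no looks
        if (!q.isEmpty()) f.complete(q.removeFirst());
        else qf.addLast(f);
        return f;
    }

    public synchronized void put(T item) {
        if (!qf.isEmpty()) qf.removeFirst().complete(item);
        else q.addLast(item);
    }

    public static void main(String[] args) {
        LookGetQueue<String> fq = new LookGetQueue<>();
        CompletableFuture<String> l1 = fq.look();
        CompletableFuture<String> l2 = fq.look();
        CompletableFuture<String> g = fq.get();  // same future as l1 and l2
        fq.put("a");
        fq.put("b");
        System.out.println(l1.join() + g.join() + fq.get().join());  // prints aab
    }
}
```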

\ref{id(21319)}[MISSING HREF]
shows three sequences of code that will fill array
\texttt{f}
with the same values.
\par

	%\begin{tabular}
	Equivalent series of looks, gets, and puts.
	\begin{verbatim}
int i=0;
q.put("a"); q.put("b");
f[i++]=q.look();
f[i++]=q.look();
f[i++]=q.look();
f[i++]=q.get();
f[i++]=q.look();
f[i++]=q.get();
	\end{verbatim}
	\begin{verbatim}
i=0;
f[i++]=q.look();
q.put("a");
f[i++]=q.look();
q.put("b");
f[i++]=q.look();
f[i++]=q.get();
f[i++]=q.look();
f[i++]=q.get();
	\end{verbatim}
	\begin{verbatim}
i=0;
f[i++]=q.look();
f[i++]=q.look();
f[i++]=q.look();
f[i++]=q.get();
f[i++]=q.look();
f[i++]=q.get();
q.put("a");
q.put("b");
	\end{verbatim}
	%\end{tabular}
	
\subsection{States of the future queue}
A
\texttt{FutureQueue}
object has three major components:
\par

\begin{enumerate}

\item{q : a queue of items (objects and null values),}

\item{qf : a queue of futures}

\item{lf : a ``look future,'' which may be null.}

\end{enumerate}
The future queue may be modeled as an infinite-state machine, since the internal queues are theoretically unbounded. The states of the future queue will change as methods are called. The calls may be considered inputs to the machine. There are two particular rules that the states of the future queue will abide by:
\par

\begin{enumerate}

\item{Within a look-get group, lf will not be null. It will have a reference to the future that all the looks and the get in that group will return. Outside of a look-get group, lf will be null.}

\item{The length of q minus the length of qf will equal the number of put s minus the number of look-get groups that have begun. A look-get group of course begins with its initial look , if any, and ends with its terminal get .}

\end{enumerate}
Fourteen of the states of a future queue are shown in
\ref{id(98874)}[MISSING HREF]
along with state transitions. The three major components of the state are written as
\emph{(}

\texttt{q}

\emph{,}

\texttt{qf}

\emph{,}

\texttt{lf}

\emph{)}
. At most, one of the queues,
\texttt{q}
or
\texttt{qf}
, can be nonempty at a time. When a queue is empty, we write it
\emph{[ ]}
. When it is nonempty, we write it
\emph{[a,...]}
or
\emph{[f,...]}
. The state of
\texttt{lf}
is either
\emph{null}
or
\emph{F}
, indicating that it contains a future.
\par

\includegraphics{figures/stoq-13}
We will discuss the behavior of
\texttt{get()}
and
\texttt{put()}
alone first, and then their behavior in the presence of
\texttt{look()}
calls.
\par
Let's first consider the usual queue transitions, shown in the lower left of the figure. The queue of items can be empty or nonempty. The queue of futures is empty. A
\texttt{put()}
will put an item into the queue of items. A
\texttt{get()}
will remove and return an item. After the
\texttt{put}
, the length of
\texttt{q}
will be one greater than before. After a
\texttt{get()}
, it will be one less. Calling
\texttt{put()}
moves us down the column. Calling
\texttt{get()}
moves us up.
\par
Now suppose
\texttt{get}
s run ahead of
\texttt{put}
s. Suppose both queues are empty and
\texttt{get()}
is called. The
\texttt{get()}
method creates a future and places it in the queue of futures and returns the future immediately. Now
\texttt{qf}
is nonempty, and
\texttt{q}
is empty. More calls to
\texttt{get()}
will lengthen the queue of futures.
\par
When the queue of futures
\texttt{qf}
is nonempty, a
\texttt{put()}
will remove the first future from the queue and place the item in it. The queue of futures will shrink by one. Again, calling
\texttt{put()}
moves us down the column. Calling
\texttt{get()}
moves us up.
\par
These are all the transitions on the left side of the figure.
\par
Now consider what happens when you call
\texttt{look()}
. Suppose the call is the first look in a look-get group and the queue of items
\texttt{q}
is empty. When look is called,
\texttt{lf}
is null. The
\texttt{look()}
performs the first action of
\texttt{get()}
by placing a future in the
\texttt{qf}
. It also saves a reference to that future in
\texttt{lf}
for subsequent
\texttt{look()}
calls and the final
\texttt{get()}
of the look-get group to use. Those two actions take us up (adding to the future queue) and right (setting
\texttt{lf}
).
\par
It's the same when the initial
\texttt{look()}
of a look-get group is called when the item queue
\texttt{q}
is nonempty. The
\texttt{look()}
will create a future and assign it to
\texttt{lf}
, moving right. Then it will remove the first item from
\texttt{q}
and assign it to the future in
\texttt{lf}
, moving up.
\par
If a
\texttt{look()}
is called in the midst of a look-get group, it simply returns the future in
\texttt{lf}
. These are the self-loops on the right side.
\par
When the terminal
\texttt{get()}
is called in a look-get group that has at least one look in it (that is, when \texttt{get} is called from one of the states on the right-hand side), it returns the future in
\texttt{lf}
and then clears
\texttt{lf}
to null. It does not remove an item from
\texttt{q}
nor enqueue a future in
\texttt{qf}
, since that has already been done by the initial
\texttt{look()}
in the group. This moves straight right to left across the figure.
\par
A call to
\texttt{put()}
on the right side behaves the same as on the left side. If there are waiting futures in
\texttt{qf}
, it removes the first and places the item in it. If there are no waiting futures, it places the item in
\texttt{q}
. It moves down the right side.
\par
The overall structure of
\texttt{FutureQueue}
is shown in
\ref{id(56293)}[MISSING HREF]
. The
\texttt{QueueComponent}
objects are simple queues that have methods
\texttt{get()}
,
\texttt{put()}
, and
\texttt{isEmpty()}
. They are used as components of several other thread package classes. They are not synchronized, since they are only used within other objects that do synchronize access.
\par
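A component queue along these lines could be as small as the following hypothetical sketch; only the three method names come from the text, and the class is deliberately left unsynchronized, matching the description above.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of a QueueComponent-style class: a plain FIFO queue
// with put(), get(), and isEmpty(). It is unsynchronized because the
// enclosing objects synchronize access themselves.
class SketchQueueComponent {
    private final ArrayDeque<Object> items = new ArrayDeque<>();

    public void put(Object obj) { items.addLast(obj); }

    public Object get() { return items.removeFirst(); }

    public boolean isEmpty() { return items.isEmpty(); }
}
```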

	%\begin{tabular}
	Overall structure of FutureQueue
	\begin{verbatim}
public class FutureQueue {
	QueueComponent q = new QueueComponent();
	QueueComponent qf = new QueueComponent();
	Future lf = null;
	FutureFactory ff = null;

	public FutureQueue() {...}
	public FutureQueue(RunQueue r) {...}
	public FutureQueue(FutureFactory f) {ff = f;}

	public synchronized void put(Object obj) {...}
	public synchronized boolean isVacant() {...}
	public synchronized boolean isEmpty() {...}
	public synchronized Future get() {...}
	public synchronized Future look() {...}
	public void runDelayed(Runnable r) {...}
	public synchronized Object getSkip() {...}
	public synchronized Object lookSkip() {...}
}
	\end{verbatim}
	%\end{tabular}
	Code for the
\texttt{put()}
method is shown in
\ref{id(20273)}[MISSING HREF]
. If a future is waiting in \texttt{qf}, it dequeues that future and assigns it the value; if none is waiting, it enqueues the value in \texttt{q}.
\par

	%\begin{tabular}
	Method put() of FutureQueue
	\begin{verbatim}
	public synchronized void put(Object obj) {
		Future f;
		if (!qf.isEmpty()) {
			f = (Future) qf.get();
			f.setValue(obj);
		} else {
			q.put(obj);
		}
	}
	\end{verbatim}
	%\end{tabular}
	Method
\texttt{get()}
has two cases to consider. First, if
\texttt{lf}
is not null, then this is the end of a series of one or more \texttt{look}s in a look-get group. In this case, the \texttt{get()} merely needs to return the same future the \texttt{look}s returned (i.e., the contents of \texttt{lf})
and to set
\texttt{lf}
to null so that later looks will start a new look-get group. If there were no preceding looks, then this
\texttt{get()}
is both the beginning and the end of a look-get group. It must either remove an item from the queue of items
\texttt{q}
and return it in a future, or it must enqueue a future in
\texttt{qf}
if
\texttt{q}
is empty.
\par

	%\begin{tabular}
	Method get() of FutureQueue
	\begin{verbatim}
	public synchronized Future get() {
		Object obj;
		Future f;
		if (lf != null) {
			f = lf;
			lf = null;
			return f;
		}
		if (!q.isEmpty()) {
			obj = q.get();
			lf = null;
			return ff.make(obj);
		}
		f = ff.make();
		qf.put(f);
		return f;
	}
	\end{verbatim}
	%\end{tabular}
	The
\texttt{look()}
method's behavior is trivial if it is not the first look in a look-get group. It merely returns the future referenced by
\texttt{lf}
. If it is the first
\texttt{look}
in the group, it has to create the future used by the group. Then, it must either get the first item from
\texttt{q}
and put it in the future, or it must enqueue the future in
\texttt{qf}
, depending on whether there is anything in
\texttt{q}
to get.
\par

	%\begin{tabular}
	Method look() of FutureQueue
	\begin{verbatim}
	public synchronized Future look() {
		Object obj;
		if (lf != null) return lf;
		lf = ff.make();
		if (!q.isEmpty()) {
			obj = q.get();
			lf.setValue(obj);
		} else {
			qf.put(lf);
		}
		return lf;
	}
	\end{verbatim}
	%\end{tabular}
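To exercise the look-get protocol end to end, here is a hypothetical miniature mirroring the three fields \texttt{q}, \texttt{qf}, and \texttt{lf}, with \texttt{java.util.concurrent.CompletableFuture} standing in for the book's \texttt{Future} class:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

// Hypothetical miniature of the look-get protocol: looks in one group share
// the future cached in lf; the terminal get returns that future and clears lf.
class MiniLookGetQueue {
    private final Queue<Object> q = new ArrayDeque<>();
    private final Queue<CompletableFuture<Object>> qf = new ArrayDeque<>();
    private CompletableFuture<Object> lf = null;

    public synchronized void put(Object obj) {
        if (!qf.isEmpty()) qf.remove().complete(obj);
        else q.add(obj);
    }

    public synchronized CompletableFuture<Object> look() {
        if (lf != null) return lf;       // mid-group look: reuse the future
        lf = new CompletableFuture<>();  // first look: create the group's future
        if (!q.isEmpty()) lf.complete(q.remove());
        else qf.add(lf);
        return lf;
    }

    public synchronized CompletableFuture<Object> get() {
        if (lf != null) {                // terminal get of a look-get group
            CompletableFuture<Object> f = lf;
            lf = null;
            return f;
        }
        if (!q.isEmpty()) return CompletableFuture.completedFuture(q.remove());
        CompletableFuture<Object> f = new CompletableFuture<>();
        qf.add(f);
        return f;
    }
}
```

All looks in one group return the same future as the terminal \texttt{get()}, and the group as a whole consumes exactly one item.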
	
\subsubsection{Example of FutureQueue: The Queued Readers-Writers Monitor}
The
\texttt{QueuedReadersWritersMonitor}
is implemented using the
\texttt{FutureQueue}
class. As each reader or writer arrives, it takes the next future from a future queue and waits for a value to be assigned to that future, giving it permission to read or to attempt to write.
\par

\subsection{State of the monitor.}
The state of this monitor is contained in two variables:
\par

\begin{verbatim}
nr : the number of threads currently reading (>= 0).
fq : the FutureQueue object that queues up the threads
     awaiting permission to read or write.
\end{verbatim}

	%\begin{tabular}
	Method startReading() of QueuedReadersWritersMonitor
	\begin{verbatim}
	public void startReading()
			throws InterruptedException {
		Future f = fq.get();
		f.getValue();
		synchronized (this) {nr++;}
		fq.put(null);
	}
	\end{verbatim}
	%\end{tabular}
	
\subsection{startReading().}
If a thread tries to start reading, it takes the token from
\texttt{fq}
. This token gives it permission to read. It immediately places the token back in the queue, allowing the next thread, if one is present and is a reader, to start reading too. Each reader also increments the count of readers present. If a writer gets the token after a series of readers, it waits until all of those readers are done before proceeding.
\par

	%\begin{tabular}
	Method startWriting() of QueuedReadersWritersMonitor
	\begin{verbatim}
	public void startWriting()
			throws InterruptedException {
		Future f = fq.get();
		f.getValue();
		synchronized (this) {while (nr > 0) wait();}
	}
	\end{verbatim}
	%\end{tabular}
	
\subsection{startWriting().}
To start writing, a thread must not only get the token but also wait for the number of readers to drop to zero. A writer does not immediately place the token back in the queue, because no subsequent reader or writer may proceed until it is done. Only when it stops writing does it place the token back into
\texttt{fq}
.
\par

	%\begin{tabular}
	Method stopReading() of QueuedReadersWritersMonitor
	\begin{verbatim}
	public synchronized void stopReading() {
		nr--;
		if (nr == 0) notify();
	}
	\end{verbatim}
	%\end{tabular}
	
\subsection{stopReading().}
When a reader finishes reading, it decrements the count of readers. There may be a writer waiting for the count to become zero, so the last reader to finish calls
\texttt{notify()}
to allow the writer (if there is any) to proceed.
\par

	%\begin{tabular}
	Method stopWriting() of QueuedReadersWritersMonitor
	\begin{verbatim}
	public synchronized void stopWriting() {
		fq.put(null);
	}
	\end{verbatim}
	%\end{tabular}
	
\subsection{stopWriting().}
When a writer finishes writing, it puts the token (null) into the future queue, allowing the next reader or writer to proceed.
\par
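The complete token discipline can be approximated with standard \texttt{java.util.concurrent} classes. The sketch below is a stand-in, not the book's implementation: a fair \texttt{ArrayBlockingQueue} of capacity one plays the role of the \texttt{FutureQueue}, giving the same strictly first-come, first-served queueing of arrivals; the method names follow the monitor in the text.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the queued readers-writers token discipline.
// A single token circulates through a fair blocking queue, so blocked
// arrivals are granted the token in first-come, first-served order.
class SketchQueuedRWMonitor {
    private final BlockingQueue<Object> token = new ArrayBlockingQueue<>(1, true);
    private int nr = 0;                       // number of threads reading

    SketchQueuedRWMonitor() { token.add(new Object()); }

    public void startReading() throws InterruptedException {
        Object t = token.take();              // wait our turn in FIFO order
        synchronized (this) { nr++; }
        token.add(t);                         // pass the token on immediately
    }

    public synchronized void stopReading() {
        nr--;
        if (nr == 0) notify();                // last reader wakes a waiting writer
    }

    public void startWriting() throws InterruptedException {
        token.take();                         // hold the token...
        synchronized (this) { while (nr > 0) wait(); }  // ...and drain readers
    }

    public void stopWriting() {
        token.add(new Object());              // release the next reader or writer
    }
}
```

Readers re-add the token at once, so a run of readers overlaps; a writer keeps the token until \texttt{stopWriting()}, blocking everyone queued behind it.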

\section{Future Tables}
If all you want to use from the shared table of queues is futures, a simpler data structure to use is
\texttt{FutureTable}
. A future table contains futures that you can look up using any object as a key. A future is created in the table when it is first looked up. For dataflow purposes, either the producer or the consumer can look up a variable's future first; it doesn't matter, since the future is created on first access.
\par

	%\begin{tabular}
	
\textbf{\texttt{FutureTable}}
FutureTable
	\begin{verbatim}
FutureTable()
   The constructor. It creates an empty FutureTable object.

Future get(Object key)
   Looks up the future associated with the key in the table,
   creating the future if it is not already there.

void remove(Object key)
   Removes the future associated with the key from the table,
   if there is any.
	\end{verbatim}
	%\end{tabular}
	Futures can be removed from the table when they are no longer needed. When encoding dataflow, this is dangerous, since it's often difficult to know when the last access has been made. It's probably easier and safer to create a new future table for each scope and to simply discard it when it's no longer needed.
\par
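A future table with these three operations can be sketched with \texttt{ConcurrentHashMap.computeIfAbsent}, which supplies exactly the create-on-first-access behavior described above. The class name and the use of \texttt{CompletableFuture} here are assumptions for illustration:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a future table: the future for a key is created
// on first access, so producer and consumer may look it up in either order.
class SketchFutureTable {
    private final Map<Object, CompletableFuture<Object>> table =
            new ConcurrentHashMap<>();

    // Look up the future for key, creating it if it is not already there.
    public CompletableFuture<Object> get(Object key) {
        return table.computeIfAbsent(key, k -> new CompletableFuture<>());
    }

    // Discard the future for key, if any.
    public void remove(Object key) {
        table.remove(key);
    }
}
```

Whichever of the producer and the consumer calls \texttt{get(key)} first creates the future; the producer completes it, and the consumer waits on its value.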

\section{Chapter Wrap-up}
There are two major data structures discussed in this chapter:
\texttt{SharedTableOfQueues}
and
\texttt{FutureQueue}
.
\par
The
\texttt{SharedTableOfQueues}
class is a versatile facility for synchronization and communication. We saw how to implement a variety of parallel programming patterns using it: futures, locks, semaphores, queues, locked records, bounded buffers, barriers,
\emph{I}
-structures, and reactive objects.
\par
The class
\texttt{IndexedKey}
can be used to create a collection of related keys, which helps in coding some of the
\texttt{SharedTableOfQueues}
patterns.
\par
Class
\texttt{FutureQueue}
is used in the implementation of
\texttt{SharedTableOfQueues}
. In turn,
\texttt{FutureQueue}
's implementation presents a number of problems. We examined it in detail as an example of advanced thread programming.
\par
Future queues themselves can be used to get strictly first-in/first-out queueing of threads. As an example, we examined the
\texttt{QueuedReadersWritersMonitor}
class, a more efficient implementation of the same scheduling policy as the
\texttt{TakeANumberMonitor}.
(See ``
\ref{../a/Monitors.xml#id(36307)}[MISSING HREF]
'' in Chapter 4).
\par
If, as is often the case, only a table of named futures is required,
\texttt{FutureTable}
provides a more efficient implementation.
\par

\section{Exercises}
1. Implement a semaphore with first-come, first-served queueing using a
\texttt{FutureQueue}
object.
\par
2. Implement a bounded buffer using
\texttt{FutureQueue}
objects.
\par
3. Implement a modified version of
\texttt{WarshallTQ}
that uses
\texttt{FutureTable}
rather than
\texttt{SharedTableOfQueues}
for communicating the sections of the rows and columns. Does the simpler, presumably more efficient, data structure make a significant difference in performance?
\par

