%\documentclass[twocolumn,10pt,letterpaper]{article}
\documentclass[a4paper,10pt]{article}
\usepackage{ieee6x9}

\usepackage{subfigure}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{url}
\usepackage[singlelinecheck=false,bf]{caption}

%\setlength{\pdfpagewidth}{8.5in}
%\setlength{\pdfpageheight}{11in}

%\setlength{\topmargin}{-0.4in}
%\setlength{\textheight}{8.6in}

\renewcommand{\topfraction}{.9}
\renewcommand{\bottomfraction}{.9}
\newcommand{\Command}[1]{{\bf \tt #1}}
\renewcommand{\textfraction}{.1}

\begin{document}

%-------------------------------------------------------------------------------
% Title
%-------------------------------------------------------------------------------

\title{\bf Spice Messaging Queue: A Tactical Distributed Message System for
Campus Grids}

\author{Peter Bui, Aaron Dingler}

\date{May 5, 2010}

\maketitle

%-------------------------------------------------------------------------------
% Abstract
%-------------------------------------------------------------------------------

\begin{abstract}

This paper describes Spice Messaging Queue (SMQ), a tactical distributed
message-queuing system.  The main goals of SMQ are reliable communication,
event-based computation, flexible message routing, and fault tolerance.  SMQ
provides users with a mailbox-like communication scheme coupled with active
storage mechanisms.  The performance, fault tolerance, and reliability of SMQ
are evaluated using three workloads: a simple message relay, an image
processor, and a data extractor.

\end{abstract}

%-------------------------------------------------------------------------------
% 1. Introduction
%-------------------------------------------------------------------------------

\section{Introduction}

Many applications follow a dataflow similar to that in Figure \ref{fig:flow},
where one program generates a large amount of data that is processed by a
series of subsequent programs.  Each data processor may be responsible for
archiving its input, output, or intermediate data either locally, via remote
storage, or via one or more remote nodes dedicated to this task.  Constructing
a program to execute a dataflow of this type can be a difficult programming
challenge, especially if the data processors and storage devices reside on many
different machines.  For large datasets, the problem is compounded by the need
for fault tolerance (e.g.,  what happens to data if a data processor crashes?),
and reliable communication (e.g., what if data is lost due to dropped network
connectivity?).  Furthermore, the act of adding, removing, or modifying the
number or responsibilities of the data processors and archivers could require
considerable extra development time.

In this paper we describe the Spice Messaging Queue (SMQ), a tactical
distributed message-queuing system that enables a user to quickly deploy and
execute programs with complex dataflows such as those in Figure \ref{fig:flow}.
Furthermore, SMQ provides the following key features:

\begin{figure}[h]
\begin{center}
%\includegraphics[width=\columnwidth]{flow.pdf}
\includegraphics[width=4.0in]{flow.pdf}

\caption{{\bf Example Message Queue Dataflow}.  In this example, a data
generator periodically creates and packages data messages that are passed to a
data processor.  This data processor in turn may relay the data to other data
processors or may archive the data to some storage system.}

\label{fig:flow}

\end{center}
\end{figure}

\begin{enumerate}

\item{\bf Reliable Communication}:

SMQ provides reliable communication; messages persist in the queue once
inserted by a data generator.  Therefore, if the data generator exits (due to
a crash, etc.), any messages in the system will still be present if/when it
restarts.
Processing can also be carried out by the data processors even if the data
generator is no longer present.  Furthermore, mechanisms exist in the system to
ensure that messages are delivered.

\item{\bf Flexible Routing}:

SMQ was designed to enable flexible routing of messages.  By writing custom
``relay'' and data processor scripts, users can specify how messages will be
routed through the system.  For example, in Figure \ref{fig:flow} any of the
data processors can write its output to storage, send a message to one or more
other data processors, etc.  Furthermore, the way users specify destination
queues is flexible, allowing for load balancing among multiple data
processors.

\item{\bf Fault Tolerance}:

In order for a distributed system to be successful, it ought to be able to
handle faults gracefully.  For example, faults should not cause messages to be
lost, so the remote procedure call (RPC) to send a message must allow resending
until it is successful.  In order to recover from crashes, the software
components in SMQ log transactions, replaying the log on a restart.

\end{enumerate}

The organization of the remainder of this paper is as follows: in Section
\ref{sec:design} we describe the architecture, organization, and naming scheme
used in SMQ.  Section \ref{sec:implementation} details the physical
implementation of SMQ.  In Section \ref{sec:eval} we present an evaluation of
the performance, reliability, and fault tolerance of SMQ.  We conclude in
Section \ref{sec:conclusion} and present possible future work.

%-------------------------------------------------------------------------------
% 2. Design
%-------------------------------------------------------------------------------

\section{Design}

\label{sec:design}

In this section we present the architecture, organization, and naming scheme
used in SMQ.  Further implementation details are described in Section
\ref{sec:implementation}.

\subsection{Architecture}

\label{ssec:arch}

The architecture of SMQ is organized as shown in Figure \ref{fig:architecture}.
A manager running on a machine is responsible for one or more message queues.
Each message queue consists of an exchange, a set of messages, and a set of
bindings.  An exchange is responsible for running the bindings on a queue, as
well as for flushing messages from the queue (when the {\tt flush} RPC is
called).

\begin{figure}[h]
\begin{center}
%\includegraphics[width=\columnwidth]{architecture.pdf}
\includegraphics[width=5.0in]{architecture.pdf}

\caption{{\bf SMQ Architecture}.  In SMQ, a manager maintains a set of message
queues which consist of messages and bindings.  For each message queue, there
is an exchange that monitors the messages and executes the bindings.}

\label{fig:architecture}
\end{center}
\end{figure}

A message consists of a header, which contains meta-data about the message, and
a body, which is the contents of the message (text, an image, etc., depending
on the application).  The header consists of name/value pairs that are used to
specify routing, time-to-live, etc.  Two name/value pairs are required in each
header: source and target.  Other pairs may be added as desired for use in
custom routing scripts (either via a relay or via other bindings, as described
below).
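As a concrete illustration, a header of this form can be modeled as a small set
of name/value pairs.  The following Python sketch is illustrative only -- the
helper names and the line-oriented text format are assumptions, not taken from
the SMQ source:

```python
def make_header(source, target, **extra):
    """Build a message header; 'source' and 'target' are required pairs."""
    header = {"source": source, "target": target}
    header.update(extra)  # optional pairs for custom routing scripts
    return header

def serialize_header(header):
    # One "name: value" pair per line (an assumed text encoding).
    return "\n".join("%s: %s" % (k, v) for k, v in sorted(header.items()))

def parse_header(text):
    pairs = (line.split(": ", 1) for line in text.splitlines() if ": " in line)
    return {name: value for name, value in pairs}
```

A round trip through serialization recovers the same pairs, which is all a
routing script needs in order to inspect fields like ``target''.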

A binding is a user-generated script that is attached to a message queue and
executed periodically by that queue's exchange.  Bindings can be written in any
programming language as long as they can be executed on the hosts on which the
exchange is running (testing outside SMQ before attaching to a queue is highly
recommended).  Section \ref{sec:eval} contains some examples of how bindings
can be used to perform interesting tasks within SMQ.
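A binding can be arbitrarily simple.  The sketch below shows the general shape
of one in Python; since the calling convention is not fixed here, it assumes
the exchange hands the binding the message header and body text, and the
{\tt subject} value ``log-text'' is purely hypothetical:

```python
def parse_header(text):
    # Header assumed to be "name: value" pairs, one per line.
    return dict(line.split(": ", 1) for line in text.splitlines() if ": " in line)

def run_binding(header_text, body_text, out):
    """Act only on messages whose 'subject' matches; append the body to
    'out', a stand-in for whatever real work the binding would do."""
    header = parse_header(header_text)
    if header.get("subject") != "log-text":
        return False          # ignore messages not addressed to this binding
    out.append(body_text)
    return True
```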

In addition to any user-defined queues, each manager contains a ``relay''
queue.  This queue has a binding attached to it that watches for incoming
messages and routes them to other queues; in this way, users can send messages
to a relay queue on any manager and have them forwarded to their actual
destination.  Since the relay is itself a binding script, it can be customized
by the user to implement unique routing between queues.  The default relay
script provided with the SMQ source simply takes incoming messages and uses the
{\tt put} RPC to forward them to the queue specified by the ``target''
name/value pair in the message header.  The relay script also keeps a log of
which {\tt put}s have succeeded, so after a crash it will send any messages
that have yet to be delivered.  Logging transactions also ensures that messages
are not sent multiple times.
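The relay's at-least-once logic can be sketched as follows; {\tt smq\_put}
here is a stand-in callable for the real {\tt put} RPC, and the message-tuple
format is an assumption for illustration:

```python
def relay(messages, sent_log, smq_put):
    """Forward each unsent message to its 'target' queue, logging successes.

    messages: iterable of (msg_id, header_dict, body)
    sent_log: set of msg_ids already delivered (the replayed log)
    smq_put:  callable(queue, header, body) -> True on success
    """
    for msg_id, header, body in messages:
        if msg_id in sent_log:      # already delivered: skip (no duplicates)
            continue
        if smq_put(header["target"], header, body):
            sent_log.add(msg_id)    # record so a restart will not resend
    return sent_log
```

Replaying the returned log after a crash suppresses duplicate sends while
still retrying anything that never succeeded.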

\subsection{Organization}

The organization of an SMQ system is presented in Figure
\ref{fig:organization}.  SMQ managers register their queues with the Chirp
catalog server.  Even if no queues have been created by users, at least the
relay queue will be recorded for each manager.  Updates are sent periodically
to the catalog server to inform it about new queues, the hostname and port of
the manager, the number of messages in each queue, etc.

Clients can communicate with the catalog server and managers via RPCs.  The
RPCs are invoked inside a client program via a library (discussed in Section
\ref{ssec:library}), or via command-line utilities that wrap the library
functions (discussed in Section \ref{ssec:utilities}).  Clients connect to the
catalog server in order to get queue information; connections can be made
explicitly by the user, or implicitly in the context of an RPC.  For example, a
client may use a utility to explicitly contact the catalog server to list all
available queues, and then use a utility to send a message to a specific queue
in that list.  When the latter utility is invoked, it implicitly contacts the
catalog server to ensure that the desired queue exists, and to determine the
location of the queue.

\begin{figure}
\begin{center}
%\includegraphics[width=\columnwidth]{organization.pdf}
\includegraphics[width=5.0in]{organization.pdf}

\caption{{\bf Organization}.  SMQ follows a semi-structured organization model:
managers register their message queues with a central catalog server, which
keeps track of all queues.  Clients contact the catalog server to find a queue
and then connect directly to the manager to perform RPC operations.}

\label{fig:organization}
\end{center}
\end{figure}

\subsection{Naming}

Queue information is recorded using the Chirp catalog server; custom fields
(name/value pairs) were added to keep track of each SMQ queue (e.g., its name
and type).

One of the goals of SMQ was to have a flexible naming scheme.  A queue is
identified by a name given when it is created.  In the case where there are
queues with the same name on different hosts (e.g. the relay queues), the
hostname is used to distinguish between them.  If the hostname is not
specified, then any available queue with that name will be chosen; a
command-line flag to each applicable utility lets the user specify if the
chosen queue should be a random queue, or the first that is found (starting
with the local queue if available).
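The selection logic described above might be sketched as follows; the catalog
entry format and the function name are assumptions for illustration:

```python
import random

def choose_queue(catalog, name, hostname=None, pick_random=False, localhost=None):
    """Pick a queue matching 'name' from catalog entries (dicts with
    'name' and 'host'), mirroring the flag behavior described above."""
    matches = [q for q in catalog if q["name"] == name]
    if hostname is not None:
        # Explicit hostname: the queue must exist on that exact manager.
        matches = [q for q in matches if q["host"] == hostname]
        return matches[0] if matches else None
    if not matches:
        return None
    if pick_random:
        return random.choice(matches)   # the "-r" style behavior
    for q in matches:                   # prefer the local queue if available
        if q["host"] == localhost:
            return q
    return matches[0]                   # otherwise the first one found
```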

As discussed in Section \ref{ssec:arch}, the relay is a user-customizable
script that is bound to a dedicated ``relay queue'' on each SMQ manager.  As
such, users can target the relay on any manager and have their messages
delivered based on the semantics of the relay script (e.g., with the default
script, messages are forwarded based on the ``target'' name/value pair).


%-------------------------------------------------------------------------------
% 3. Implementation
%-------------------------------------------------------------------------------

\section{Implementation}
\label{sec:implementation}

The Spice Messaging Queue code is implemented in the C programming language,
with the exception of the {\it relay} and {\it convert-to-png} bindings, which
are in Python and Ruby, respectively.  Here we describe how the various
components of SMQ are implemented.

\subsection{SMQ Manager}

As discussed in Section \ref{ssec:arch} and shown in Figure
\ref{fig:architecture}, an SMQ manager is composed of one or more message
queues (one of which is the relay queue).  Internally, each message queue is
simply a directory of messages and contains a set of bindings, which are stored
in a special directory as executables.  To uniquely identify each message, a
timestamp string is used to name each message in a queue.

To handle RPC operations, the SMQ manager runs in a loop waiting for incoming
connections from clients.  When a client connects, the manager forks a process
to handle the connection.  This forked child process waits for text-based RPCs
from the client, calls the necessary server-side functions for each RPC --
these are defined in {\tt smq\_message\_queue.c} -- and responds as necessary.
Any failed RPC operation or idle timeout will cause the child process to close
the connection and exit.
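The dispatch performed by the child process can be sketched as follows (in
Python rather than the C of the actual implementation); the handler table and
the single-character status replies used here are illustrative:

```python
def handle_rpc(line, handlers):
    """Dispatch one text-based RPC line to a matching server-side
    function, returning "1" on success and "0" on failure."""
    parts = line.strip().split()
    if not parts or parts[0] not in handlers:
        return "0"                       # unknown RPC name: report failure
    try:
        return "1" if handlers[parts[0]](*parts[1:]) else "0"
    except Exception:
        return "0"                       # wrong arity or handler error
```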

The manager also sets up exchanges as necessary.  Periodically, the manager
executes a function that iterates through all of its message queues and
determines whether an exchange is running for each queue.  If any queue has
been created but does not have an exchange, the manager will start one.  The
manager is also responsible for setting up the relay queue, so it will create
an exchange for the relay the first time this loop is entered and set up the
relay binding as appropriate.

Each exchange is responsible for executing all bindings associated with the
queue to which it is attached, and also for unlinking messages marked to be
flushed.  The exchange runs in a loop that {\bf a)} checks to see if a message
is marked to be flushed, and if so unlinks the header and body, and {\bf b)}
iterates through each message in the queue and executes any bindings associated
with the queue.  The exchange then sleeps for a time specified when the
exchange is created, and repeats this process.  
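One pass of this loop can be sketched as follows, using an in-memory model of
a queue (an assumption for illustration; the real implementation operates on
message files in a directory):

```python
def exchange_pass(queue):
    """One iteration of the exchange loop: (a) unlink messages marked to
    be flushed, then (b) run every binding on every remaining message.
    'queue' is a dict with 'messages' (id -> message dict) and
    'bindings' (a list of callables)."""
    flushed = [mid for mid, msg in queue["messages"].items() if msg.get("flushed")]
    for mid in flushed:
        del queue["messages"][mid]       # unlink the header and body
    for msg in queue["messages"].values():
        for binding in queue["bindings"]:
            binding(msg)                 # execute each attached binding
```

Between passes the real exchange sleeps for the interval specified at
creation time, then repeats.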

In order to recover from crashes, the exchange writes to a log when new
bindings are created and when messages are flushed.  This log is replayed
whenever an exchange is started, and the recovered state is kept in memory.
After each successful {\tt bind} or {\tt flush} operation, a new log entry is
appended to the exchange log file.
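The replay step can be sketched as follows; the log entry format shown here is
an assumption, since the on-disk format is not specified above:

```python
def replay_log(entries):
    """Rebuild exchange state from logged ('bind'/'flush', arg) entries,
    as happens when an exchange restarts after a crash."""
    bindings, flushed = [], set()
    for op, arg in entries:
        if op == "bind":
            bindings.append(arg)   # re-register the binding script
        elif op == "flush":
            flushed.add(arg)       # message ids already flushed
    return bindings, flushed
```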

\begin{table*}
  \centering
  {\small
    \begin{tabular}{|c|c|c|}
      \hline
      Name & Arguments & Result \\
      \hline
      {\tt put} & queue header\_length body\_length & Message {\it header+body} is put in {\it queue}\\
      \hline
      {\tt get} & queue message\_id & Message {\it message\_id} is fetched from {\it queue}\\
      \hline
      {\tt bind} & queue script\_name script\_length & Bind {\it script} to {\it queue}\\
      \hline
      {\tt bindings} & queue & List bindings on {\it queue}\\
      \hline
      {\tt create} & queue & Create {\it queue}\\
      \hline
      {\tt list} & queue & List messages in {\it queue}\\
      \hline
      {\tt flush} & queue & Flush (unlink) messages from {\it queue}\\
      \hline
    \end{tabular}
  }
  \caption{
    {\bf SMQ RPC Operations}. List of all client-side SMQ remote procedure calls.  Note
    that each RPC also takes a {\tt link} and timeout as arguments (these are omitted for
    the sake of space).
  }
  \label{tab:util}
\end{table*}

\subsection{SMQ Library}
\label{ssec:library}

The RPCs in SMQ are implemented as a client-side library that is available to
developers.  Table \ref{tab:util} lists all RPCs, their arguments, and the
result of calling each one.  In all cases, the RPC command is sent as a simple
line of text to the connected manager using the {\tt link} structure -- each
RPC library function therefore takes a {\tt link} as an argument.  Once the RPC
is sent, the manager acts on it and data can be passed through the {\tt link}.
For example, on a {\tt get} RPC, the client writes ``get {\it queue
message\_id}'' to the manager, which then sends the length and name of the
header followed by the header itself, and then the length of the body followed
by the body itself.  To signal the return status of an RPC operation, the SMQ
manager writes a $1$ to the client's {\tt link} for success and a $0$ for
failure.
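The client side of this exchange can be sketched as follows (in Python rather
than C); the exact line framing of the lengths and names is an assumption for
illustration:

```python
import io

def read_get_response(stream):
    """Parse the manager's assumed reply to a 'get' RPC: a line with the
    header length and name, then the header bytes, then a line with the
    body length, then the body bytes."""
    hlen, hname = stream.readline().split()
    header = stream.read(int(hlen))      # header text, length as declared
    blen = int(stream.readline())
    body = stream.read(int(blen))        # body follows its own length line
    return hname, header, body
```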

\subsection{SMQ Utilities}
\label{ssec:utilities}

Each RPC listed in Table \ref{tab:util} is wrapped as a command-line utility
written in the C programming language for use by developers and end-users.
Each utility takes the command-line arguments necessary for its RPC, as well as
some useful flags.  When the name of a queue is specified on the command line,
it may be prefixed with the hostname on which the queue resides.  In this
case, a queue of that name must exist on the manager at that hostname, or the
RPC will fail.  If no hostname is specified, the utility will by default choose
any manager (starting with the local manager) that has a queue with that name.
If the ``-r'' command-line option is set, managers are chosen at random.  Each
utility queries the catalog server to ensure that the specified queue exists
and/or to choose the queue if no hostname is specified, and also to determine
the address and port of the manager to pass to the necessary RPC library
function.

Each utility also has a ``-d {\it flag}'' option to specify debugging output to be
printed as the utility executes; the default is no debugging output.  A ``-t
{\it flag}'' option is used to specify a timeout for the RPC if the default
timeout specified in the utility is undesirable.

A utility called {\tt smq\_status} is provided that does not correspond to any
RPC.  This utility queries the catalog server to get status information for all
queues currently registered.  If the ``-l'' command-line flag is used, all of
the information about the message queues is output to the screen as a series of
name/value pairs.  By default, the utility outputs the name of the queue, the
hostname, the port, the version number, and the number of messages in the queue
in table format.

%-------------------------------------------------------------------------------
% 4. Evaluation
%-------------------------------------------------------------------------------

\section{Evaluation}
\label{sec:eval}
In this section we present an evaluation of SMQ using three distinct workloads.  
The main goals of SMQ are correctness and fault tolerance, so we focus mainly on 
these aspects rather than on the performance of the system for each workload.

\subsection{Simple Relay}

As a first example, we considered the case of a simple message relay.  In this
example, the user starts a manager on the local host (or uses a manager on a
remote host) and begins to queue up messages to be sent to a remote queue.
This target queue, call it {\it Q}, has not been created yet, so when the
relay script attempts to {\tt put} a message to {\it Q}, the RPC will fail.
As such, the relay's log will reflect that these messages were not successfully
{\tt put} to the remote queue.  When {\it Q} is eventually created, the relay
script (which is periodically executed by the exchange) will begin to {\tt put}
messages successfully and record each {\tt put} transaction in its log.  If
{\it Q} is killed before all messages are successfully sent, the {\tt put}
RPCs will again start to fail until the queue is brought back up.  At that
point, all remaining messages will be sent to {\it Q}.

We were able to set up this experiment and confirm that messages are sent
reliably using our relay script.  During the test, we killed and restarted
{\it Q} and checked the status of the queue using the {\tt smq\_status}
utility.  Eventually, our relay sent all of the messages to {\it Q},
demonstrating the reliability of the relay mechanism we implemented.

\subsection{Image Processing}
\begin{figure}
\begin{center}
%\includegraphics[width=\columnwidth]{img_failure.pdf}
\includegraphics[width=5.0in]{img_failure.pdf}
\caption{{\bf Image Processing -- Failure}.  This graph shows the timeline of our image
processing application.  As can be seen, the managers were started in a
staggered fashion and the data generator was able to use new nodes as they came
online.  However, due to user error, one binding was added late and a small number of messages
never got converted.  In total, we processed 919 images successfully.}
\label{fig:img_failure}
\end{center}
\end{figure}

In this example, a data generator script takes 1000 images from the BXGrid
biometrics repository and converts them from TIFF to PNG.  Four user-created
queues are used: one queue, {\it png-sink}, on cclws14, and one queue called
{\it convert-2-png} on each of student00, student01, and student02.  The data
generator takes the 1000 images and creates a header/body pair for each.  Each
header specifies the ``target'' as ``png-sink'', along with two custom fields:
``subject'', which is ``convert-2-png'', and ``outfile'', which is the name of
the PNG to create.  The body of each message is simply the TIFF image.  The
data generator then calls {\tt smq\_put} for each message, with the
destination queue chosen as a random {\it convert-2-png} queue.  The {\it
convert-2-png} queues each have a Ruby script bound to them that takes
incoming messages, checks that the subject is ``convert-2-png'', and converts
the message body (i.e., the TIFF) to the PNG specified by the ``outfile''
field.  Finally, the Ruby script bound to each {\it convert-2-png} queue calls
{\tt smq\_put} for each message, specifying ``relay'' as the destination
queue.  The messages are thus sent to the local relay, which attempts to {\tt
put} each message to {\it png-sink}.
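The construction of these messages can be sketched as follows (in Python,
though the actual generator's language is not material here); the function
name is hypothetical, and the {\tt smq\_put} call itself is omitted:

```python
def make_image_messages(images, outnames):
    """Build the header/body pairs described above: each header targets
    'png-sink', tags the subject 'convert-2-png', and names the output
    PNG; the body is the raw TIFF bytes."""
    msgs = []
    for tiff, outfile in zip(images, outnames):
        header = {"target": "png-sink",
                  "subject": "convert-2-png",
                  "outfile": outfile}
        msgs.append((header, tiff))   # body is the unmodified TIFF
    return msgs
```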

Figure \ref{fig:img_failure} shows the results of running this image processing
example when one fault and one user error are encountered.  Note that the
manager on student00 is brought up first, followed by student01, and finally
student02.  As each manager is brought up, its queues are created via the {\tt
smq\_create} utility, and the Ruby script is bound to each of the {\it
convert-2-png} queues using {\tt smq\_bind}.  As each subsequent {\it
convert-2-png} queue is brought online, it begins receiving messages from the
data generator, converting the TIFFs to PNGs, and putting new messages to the
local relays.  In each case, note how the number of messages in each relay
(except student01's) eventually converges with the number of messages in the
corresponding {\it convert-2-png} queue, meaning that every incoming TIFF was
successfully converted.  The reason the curves for student01 do not converge is
user error.  Once a {\it convert-2-png} queue is created, the user has only a
limited window of time to set up the binding before the queue starts receiving
messages (this window depends on how often the manager updates the catalog
server, and can be customized by the user).  Due to user error, one of the
bindings was not set up within this window, and the {\it convert-2-png} queue
on student01 started to receive messages before the Ruby script was available
to handle them.  This is evidenced in the graph by the fact that the curves for
student01 do not converge, and the final message count on {\it png-sink} is
only 919.

This example exhibits the fault tolerance of the SMQ managers.  Though it is
not apparent in the graphs, the manager on cclws14 crashed (due to a memory
leak in one of the cctools functions).  The manager was manually brought back
online after a short amount of time, during which no messages were lost.  This
is due to the semantics of the relay script that ensure that all messages are
delivered.

\begin{figure}
\begin{center}
%\includegraphics[width=\columnwidth]{img_success.pdf}
\includegraphics[width=5.0in]{img_success.pdf}

\caption{{\bf Image Processing -- Success}.  This graph shows the timeline of our image
processing application.  As can be seen, the managers were started in a
staggered fashion and the data generator was able to use new nodes as they came
online.  In total, we processed 1,000 images successfully.}

\label{fig:img_success}
\end{center}
\end{figure}

Figure \ref{fig:img_success} shows the results of running this image processing
example when no issues are encountered.  This time, we made sure to bind the
scripts to the queues quickly, and the memory leak in the external library was
fixed.  Again, the manager on student00 is brought up first, followed by
student01, and finally student02.  As each subsequent {\it convert-2-png} queue
is brought online, it begins receiving messages from the data generator,
converting the TIFFs to PNGs, and sending new messages to the local relays.  In
each case, note how the number of messages in each relay eventually converges
with the number of messages in each {\it convert-2-png} queue, meaning that
every incoming TIFF was successfully converted.  Finally, note that {\it
png-sink} eventually contains all 1000 messages, i.e. 1000 PNGs, so no messages
were lost.

These image conversion examples demonstrate a number of the key features of
SMQ.  First, they show how SMQ bindings fit into an active storage model.
Second, they show the flexibility of the naming scheme: random {\it
convert-2-png} queues are chosen so that the data processing is spread across
multiple nodes.  Finally, they show how data processors (i.e., the queues with
bindings) can be added to the system on the fly to implement dynamic load
balancing.  These examples also expose the main drawback of SMQ: it took the
1000 messages approximately 450 seconds to make it through the dataflow,
resulting in a throughput of 1.09 MB/s.  If the same task is run locally using
a simple loop in the shell, it takes approximately 500 seconds (the time taken
to download the images from BXGrid, do the conversion, and upload the images
is taken into account).  However, despite the only marginal performance gains,
the many features of SMQ still make it an attractive option.
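As a back-of-the-envelope check on the quoted throughput (assuming decimal
units, i.e., 1 MB = 1000 kB), these figures imply the images average roughly
half a megabyte each:

```python
# Sanity check on the quoted numbers: 1000 messages, 450 s, 1.09 MB/s.
seconds, messages, throughput_mb_s = 450.0, 1000, 1.09
total_mb = throughput_mb_s * seconds          # total data moved: ~490 MB
avg_kb_per_image = total_mb * 1000.0 / messages
assert 480 < avg_kb_per_image < 500           # roughly half a megabyte each
```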

\subsection{Data Extraction}

This final example demonstrates the use of a somewhat more complex dataflow.
In this example, Weaver (the workflow compiler) is used to generate a Makeflow
that executes the following workflow:

\begin{enumerate}

\item{Tar up Folding@Home data located in AFS,}

\item{Perform an All-Pairs computation on the set of Folding@Home data (547x547),}

\item{Send results of comparison to SMQ message queues as they are produced.}

\end{enumerate}

Multiple queues performing the same function are used to receive messages from
Weaver.  As messages are received, a binding on each queue extracts the data
(i.e. message body) to a location on the Center for Research Computing (CRC)
storage that is publicly readable by the SMQ managers.  Each manager is run on
a CRC machine as a Sun Grid Engine (SGE) job.  Five managers were used to limit
the number of queues writing to the CRC Andrew File System (AFS) space.  Figure
\ref{fig:hfeng-timeline} shows the number of jobs submitted via Makeflow to the
SMQ queues, and the number of jobs running versus elapsed time.

\begin{figure}
\begin{center}
\includegraphics[width=5.0in]{hfeng-timeline.pdf}
\caption{Timeline showing the number of jobs submitted to the SMQ queues, the jobs running,
 and the jobs completed for the data extraction example.  Note the plateaus in the ``Submitted''
 and ``Complete'' curves that correspond to time taken to {\tt flush} queues or restart Makeflow.
}
\label{fig:hfeng-timeline}
\end{center}
\end{figure}

These curves have some interesting features, namely the plateaus that occur
periodically in the ``Submitted'' and ``Complete'' curves.  The first plateau
occurred because the number of files created reached the AFS limit of 64,000.
As a result, messages were rejected by the managers, causing the {\tt put}
RPCs to fail and messages to become ``stuck'' waiting to be sent.  At this
point, the managers were brought offline and the {\tt flush} RPC was
implemented.  The managers were then brought back online, after which the {\tt
put} RPCs began to succeed and messages were delivered.  No messages were
lost, thanks to the reliable communication of SMQ and its ability to handle
incremental deployment of the managers, queues, and even new RPCs.  Subsequent
plateaus were due to reaching the file limit and {\tt flush} being called
(manually), or to Makeflow being restarted.

This example demonstrates the reliable communication, fault tolerance, and
performance of SMQ.  Despite a fault (i.e., hitting the 64,000-file AFS limit)
causing the managers to be killed, no messages were lost.  Furthermore, once the
{\tt flush} RPC was implemented, all messages were successfully delivered --
{\tt flush} does not destroy a message until all bindings on the queue have run
on that message.  Finally, we see that the total time to complete this dataflow
was less than 70 hours (under 3 days).  In a version of this dataflow that does
not use SMQ, the total time to execute the dataflow is estimated at 13 days (it
is still running at the time of this writing).

We also see some of the unique semantics of SMQ through executing this
dataflow.  In this case, SMQ provides functionality similar to the WorkQueue
master/worker.  However, if a worker dies in WorkQueue there is no notion of
persistent messages, resulting in data loss (until the data is recomputed).
SMQ also acts like an active storage medium in which data (rather than data
plus an executable) is sent to compute nodes (i.e., the queues with bindings)
and archived.  In this example, SMQ forms a strange hybrid between WorkQueue
and Chirp active storage, where computation is mobile but data storage is
persistent, and computation is triggered by events (i.e., receiving a message)
rather than by a central manager.

%-------------------------------------------------------------------------------
% 5. Related Work
%-------------------------------------------------------------------------------

%\section{Related Work}

%WebSphere.  AMQP.  MSMQ.  Complicated, lots of specs, binary protocol.

%iRods.

%Chirp, WorkQueue.

%-------------------------------------------------------------------------------
% 6. Conclusion
%-------------------------------------------------------------------------------

\section{Conclusion}
\label{sec:conclusion}

SMQ provides users with the ability to develop and deploy programs with complex
dataflows.  There are many useful features of SMQ: persistent messages,
flexible routing/naming, and guaranteed message delivery are all provided.  The
tradeoff for getting these features is that performance can be relatively poor
depending on the size and type of workload.

Future work on SMQ includes adding more RPCs, e.g., to delete specific
messages or queues.  Some form of authentication is also needed, not only for
security but also for sharing data and queues.  Improving performance is
another key goal of any future work on SMQ.  There are many areas where
performance gains could be realized; for example, when a relay does a {\tt
put} to a queue it opens a {\tt link} and uses the network, even if the queue
is local.  Copying the messages using the local file system would be much more
efficient.  Finally, a scriptable client would allow users to open one
connection to a manager and execute multiple RPCs per session (rather than
opening and closing a connection for each RPC).

%-------------------------------------------------------------------------------

\end{document}
