
% Author: Ben Tatham, Carleton University
% Date: November 2006
%
% How to compile?
% 1. Do latex paper.tex
% 2. Do bibtex paper
% 3. Do latex paper.tex twice more
% 4. Do dvipdf paper.dvi

\documentclass[12pt]{article}
\usepackage{epsfig}
\usepackage{moreverb}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{pstricks, pst-node, pst-text}
\usepackage{multido}
\usepackage{url}

\begin{document}

\title{Acquisition in Wireless Sensor Networks: Directed
Diffusion vs Pseudo-Distance Data Dissemination
	\footnote{The \LaTeX\ source for this paper and the embedded figures can be
	found at \url{http://triplipse.googlecode.com/svn/trunk/SensSimDoc/}}} 
\author{Ben Tatham
	\footnote{tatham@ieee.org, The Department of Systems and Computer Engineering,
	Carleton University, 1125 Colonel By Drive, Ottawa, ON, Canada K1S 5B6}} 

\date{November 2007}

\maketitle

\begin{abstract}
{\em Directed Diffusion}~\cite{DD} is a data-centric method of disseminating
packets in a wireless sensor network to sink nodes, using application layer
information to determine network routing while being reactive to
network changes. {\em Pseudo-Distance Data Dissemination}~\cite{PDDD} by Lee and
Lee modifies DD for greater efficiency. Lee and Lee compare PDDD to DD on a
few key points in specific mobile network topologies.  This study aims to
extend their comparison to better explain why DD is so inefficient with data
traffic.  Further, I show that PDDD is also worthwhile for fixed sensor
network topologies.  Finally, this paper presents the main drawback of
PDDD, namely the extra memory required at each node to perform the protocol.

\end{abstract}

\section{Introduction}

This work is a clarification, analysis, and extension of an article
entitled {\em Data Dissemination in Wireless Sensor Networks} written by Lee and Lee~\cite{PDDD}.

Unfortunately, the authors of PDDD do a poor job of explaining why DD is
inefficient and leave many critical details of their analysis to the reader.  This work
attempts to fill those gaps in the argument and provide a more direct
comparison of DD to PDDD.  This paper quantifies the comparison in terms
of memory consumption per node.  Further, I present a modification to PDDD to
boost efficiency even further.

\subsection{Context}\label{sec:context}

Diffusion-style networking differs from traditional networking techniques in a
number of ways. These differences make it applicable to task-specific
networks only, like sensor networks, and not for general purpose networks.  

First, diffusion is data-centric, meaning all packet passing is done based
on named, or \emph{typed}, data.  Therefore, while it is definitely related to
traditional reactive routing protocols, it crosses the abstraction boundaries
up to the application layer.  Second, communication in diffusion is
neighbor-to-neighbor; it is difficult, if not impossible, to determine the physical 
network topology, except in simulation environments.  Therefore, there are no routers in 
the network and there is no ``end'' of the network; each node is an ``end''. 
Because of the  neighbor-to-neighbor communications, nodes do not necessarily 
need globally unique addresses;  they only need to be unique among their neighbors.  

As mentioned above, diffusion is similar to more general reactive ad-hoc
routing protocols. However, unlike routing protocols, diffusion does not
attempt to find loop-free routes between two points.  It does not even try to
find a single ``best'' route between nodes.  Constrained flooding allows
messages to reach their destination, and through the use of reinforcement, the
empirically discovered best route is used most often, though no path is ever
explicitly chosen for a packet.  
Message caching at each node is required for loop avoidance.

\subsection{Problem}

The problem that both DD and PDDD attempt to solve is to efficiently get data
from source nodes, where information about sensed events is gathered, to sink
nodes, where it can be analyzed and further processed and forwarded to end
users.  Wireless sensor networks present key problems that effectively disallow
the use of general-purpose routing techniques.  Primarily, this involves power
constraints.  And among the power-hungry activities of a sensor node, radio
transmission tops the list.  Each bit transmitted consumes as much power as
2090 processor cycles on a mica2dot mote~\cite{Energy}.  So for simple analysis
and system development, it makes sense to focus on minimizing the number of
bits sent from each node.

Lee and Lee~\cite{PDDD} studied the performance of Directed Diffusion and
realized there was great room for improvement.  They were especially interested
in reducing the control-message overhead of DD, which becomes even more
significant when nodes are mobile.  

\subsection{Results}

The work of Lee and Lee is revisited in this paper.  I explore the network
lifetime and memory usage differences between DD and PDDD. 
Finally, I extend PDDD to remove acknowledgment messages between nodes. 

\subsection{Outline}

Section~\ref{sec:dd} summarizes the key points of the Directed Diffusion (DD)
protocol, followed by a similar description of Pseudo-Distance Data
Dissemination in Section~\ref{sec:pddd}.  Lee and Lee's comparative results are
revisited and extended in Section~\ref{sec:comparison}.  I then describe the
simplified simulation recreated for this paper in Section~\ref{sec:simulation}.  Finally,
I present a minor modification to PDDD in Section~\ref{sec:modification} and
show the efficiency improvements it provides.  I conclude in
Section~\ref{sec:conclusion}.

\section{Directed Diffusion}\label{sec:dd}

\subsection{Overview}

Above, in Section~\ref{sec:context}, I described the basic concepts of
diffusion-based protocols.  DD is obviously based on these ideas.  DD begins by
sink nodes flooding interest messages into the network describing: 
\begin{itemize}
  \item The type of data of interest
  \item The interval at which to send it and when the interest expires
  \item The geographic region of the network to send data from
\end{itemize}

As each node receives the interest message, it saves in memory a
\emph{gradient} which is used to determine where to forward data messages as
they arrive, or as they are generated locally.  Each gradient consists of
\begin{itemize}
  \item data rate and interest description
  \item direction (e.g. address of the node that the interest was
  received from)
\end{itemize}
The node then forwards the interest message on, changing the source address to
its own.  Thus, as each node subsequently receives the interest, it appears to
have originated at the nearest neighbor.  Each node must avoid loops in the network by
remembering interests that have already been forwarded.  In general, it
suffices to remember only the most recent one: interests are not sent
frequently, and a loop would occur after only one hop, with the neighbor
forwarding the interest right back to the node.  
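To make this bookkeeping concrete, the following is a minimal Java sketch in the style of my SensSim code; the class and field names are my own illustration, not taken from the Directed Diffusion specification.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of DD gradient bookkeeping; names are illustrative,
// not taken from the Directed Diffusion specification.
class GradientCache {
    // One gradient per (interest type, upstream neighbor) pair.
    static class Gradient {
        final String interestType;  // named data of interest
        final int neighborAddress;  // direction: the neighbor the interest came from
        final long intervalMs;      // requested data rate
        final long expiresAtMs;     // when the gradient expires

        Gradient(String interestType, int neighborAddress,
                 long intervalMs, long expiresAtMs) {
            this.interestType = interestType;
            this.neighborAddress = neighborAddress;
            this.intervalMs = intervalMs;
            this.expiresAtMs = expiresAtMs;
        }
    }

    // Most recent interest forwarded per type, for one-hop loop avoidance.
    private final Map<String, Long> lastForwarded = new HashMap<>();

    // True if this interest was already forwarded (i.e., a one-hop loop).
    boolean isDuplicate(String type, long interestId) {
        return Long.valueOf(interestId).equals(lastForwarded.get(type));
    }

    void markForwarded(String type, long interestId) {
        lastForwarded.put(type, interestId);
    }
}
```

Remembering only the last interest per type is exactly the simplification described above: a duplicate arriving one hop later is detected and dropped.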

For the first pass, the sink sends out an \emph{exploratory} request such that
the interval is large and it expires relatively quickly.  Since the interest
message is flooding the network, this allows the network to use the real data to
indicate the best paths for data, which then allows the sink to
\emph{reinforce} these paths.  When the sink receives the first data message
from each sensor, it sends back a reinforcement interest via unicast, just to
the neighbor node that the data message came from.  When each subsequent node
receives a unicast interest message, it only chooses one other node to forward
to, thus propagating the empirically determined best path back to the source. 
While complex algorithms could be used to determine which path to reinforce,
they would require more information than is available in simple DD.  Therefore,
each node simply reinforces the neighbor that first delivered a data message of
the given type.  Because of this requirement, and for data-message loop avoidance,
each node must maintain a relatively large list of information about recently
received data messages and the order in which they were received.

\subsection{Link Breakage Detection}
In DD, link breakage is detected by each node monitoring incoming data
messages, and knowing the gradient that they are coming in on.  In practice,
the node does not actually have to maintain the exact gradient that the sender
is sending with.  Rather, it can simply expect the same data rate of messages
indefinitely, because the sink will likely renew the interest before the gradient
expires.  

\subsection{Network Density}

One aspect of DD left out of both \cite{DD} and \cite{PDDD} is an analysis of
how node density affects performance.  Each node must keep information about
each neighbor node; therefore, higher node density, and thus more
neighbors per node, drastically alters the amount of memory required at
each node.  Multiply that by the number of data-message intervals tracked and
the memory requirement at each node grows rapidly.  Therefore, DD is only
suitable for relatively low-density, low-data-rate networks.  More
precise analysis is given in Section~\ref{sec:comparison} below.  

\subsection{Drawbacks of Directed Diffusion}

DD, while an innovative protocol for its time in 2003, has some key drawbacks. 
By the authors' own admission, DD was designed to optimize network reaction to
topological changes, often at the expense of more traffic. In practice,
breakages are caused by sensor nodes dying due to lack of energy. 
However, if sink nodes are mobile, as is often the case in public safety
networks where rescue workers are moving through the scene of an accident, DD
requires that new interest messages be flooded into the network far more
frequently from the sink nodes.

In addition, DD is over-zealous in its transmission of data messages.  While
indeed, DD is extremely reactive to changes in the network, it wastes precious
power by sending data messages along multiple paths.  The authors of DD do
mention that low-level radio protocols may allow a single radio broadcast to reach
multiple network-unicast recipients, but this is complex to handle and difficult to
achieve in all cases.  

\section{Pseudo-Distance Data Dissemination}\label{sec:pddd}

PDDD attempts to solve one particular drawback of DD: the excessive
interest flooding required for mobile sink nodes.  

\subsection{Overview}\label{sec:pddd-overview}

PDDD removes the gradient algorithm from DD.  Instead, each
node knows which neighbor nodes are closer to, the same distance from, and
farther from each sink node. In my mind, PDDD presents a fairly simple concept
in a fairly complex way.  This presentation attempts to expose the basic ideas
and simplify the language of the protocol definition.  

To achieve its goals, PDDD defines a new variable \emph{level}, which is calculated at
each node.  Each node also keeps a cache of the \emph{level} of each of its neighbors.
The \emph{level} consists of a few parameters, $L_{i,\mathit{sinkID}}=\langle\lambda,-\alpha,-\beta,\nu_i\rangle$, where:
\begin{itemize}
  \item $\lambda$: a distance metric, which is effectively pseudo-distance, or
  the number of hops to the sink.
  \item $\alpha$: the number of neighbors with lower $\lambda$; in other
  words, those closer to the sink
  \item $\beta$: the number of neighbors with the same $\lambda$, i.e., the same
  number of hops to the sink
  \item $\nu_i$: the unique id of the node.  
\end{itemize}
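To make the level comparison concrete, here is a hypothetical Java sketch of the tuple. The lexicographic ordering (lower $\lambda$ first, ties broken by $-\alpha$, then $-\beta$, and finally $\nu_i$ to make the order total) is my reading of the protocol, not code from Lee and Lee.

```java
// Hypothetical sketch of the PDDD level tuple. The lexicographic comparison
// (lower lambda first, ties broken by -alpha, then -beta, then the node id)
// is my reading of the protocol, not code from Lee and Lee.
class Level implements Comparable<Level> {
    final int lambda; // pseudo-distance: hops to the sink, scaled by delta
    final int alpha;  // number of neighbors closer to the sink
    final int beta;   // number of neighbors at the same distance
    final int nodeId; // unique node id (nu_i), makes the order total

    Level(int lambda, int alpha, int beta, int nodeId) {
        this.lambda = lambda;
        this.alpha = alpha;
        this.beta = beta;
        this.nodeId = nodeId;
    }

    @Override
    public int compareTo(Level other) {
        if (lambda != other.lambda) return Integer.compare(lambda, other.lambda);
        if (alpha != other.alpha) return Integer.compare(other.alpha, alpha); // -alpha
        if (beta != other.beta) return Integer.compare(other.beta, beta);     // -beta
        return Integer.compare(nodeId, other.nodeId); // unique id breaks remaining ties
    }
}
```

Including $\nu_i$ in the comparison is what turns the partial order on $\lambda$ alone into a total order over all nodes.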

Lee and Lee do a poor job of explaining what pseudo-distance is in their paper. 
Their previous work in \cite{PDR} does a somewhat better job, but is not exactly
the same as the later work in \cite{PDDD}.  They complicate the explanation
with varying terminology, calling it $\lambda$, the
distance metric, and only on occasion \emph{pseudo-distance}.  This is
surprising since it is the keyword of the protocol itself.  

In any case, pseudo-distance is just the number of hops to the sink,
multiplied by a constant, $\delta$. Much discussion is given in \cite{PDDD}
explaining why the addition of $\beta$, or the number of nodes at the \emph{same} hop distance is important to the
protocol.  In terms of theory, it boils down to the difference between a
partially-ordered graph and a totally-ordered graph.  In a partially-ordered
graph, each node only knows about the nodes closer to the sink, while when
totally ordered, it also knows about its sibling nodes.  Figure~\ref{fig:tog}
shows an arbitrary network on the left, and the totally-ordered graph layout of
it when node 5 is a sink node.  

\begin{figure}
  \begin{center}
	\input{tog.tex}  
  	\caption{Totally Ordered Graph}
    \label{fig:tog}
  \end{center}
\end{figure}


\subsection{How it Works}\label{sec:pddd-works}
Similar to DD, PDDD starts off by the sink node flooding an interest message
to its neighbors. The interest message in PDDD adds some more information to
the existing content of DD:
\begin{itemize}
  \item Original sink address
  \item Level
\end{itemize}
As the nodes receive the interest message, they update the address field, while
leaving the original sink address in the message.  However, each node does
change the Level information to its own.  The sink node's level is always
$\langle0,0,0\rangle$, because it has 0 hops to itself and has no neighbors
closer or even the same distance to itself.  
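The level computation itself can be sketched as follows. This is my own illustration of how a node might derive $\lambda$, $\alpha$, and $\beta$ from its cache of neighbor $\lambda$ values, assuming those values are spaced by a constant $\delta$; it is not code from the PDDD paper.

```java
import java.util.List;

// My own illustration of how a node might derive its level fields from a
// cache of neighbor lambda values; not code from the PDDD paper.
class LevelCalculator {
    // Returns {lambda, alpha, beta} for a node, given its neighbors'
    // lambda values and the spacing constant delta.
    static int[] computeLevel(List<Integer> neighborLambdas, int delta) {
        if (neighborLambdas.isEmpty()) {
            throw new IllegalArgumentException("node has no neighbors");
        }
        int closest = Integer.MAX_VALUE;
        for (int l : neighborLambdas) closest = Math.min(closest, l);
        int lambda = closest + delta; // one hop farther than the closest neighbor
        int alpha = 0, beta = 0;
        for (int l : neighborLambdas) {
            if (l < lambda) alpha++;       // neighbor closer to the sink
            else if (l == lambda) beta++;  // neighbor at the same distance
        }
        return new int[] { lambda, alpha, beta };
    }
}
```

For a node neighboring the sink ($\lambda=0$) and two siblings at $\lambda=100$, with $\delta=100$, this yields $\langle100,1,2\rangle$, matching the definitions above.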

In PDDD, each node within one hop to the sink detects a link breakage with the
sink via heartbeats sent by the sink.  The assumption is that sink nodes have
higher power.  The other nodes in the network use acknowledgement packets of
each data message to detect link breakage.  It struck me as odd that this part
of DD was changed; why introduce additional overhead of acknowledgements? 
While acknowledgements do provide more robust breakage detection, being able
to detect breakage in either direction of data traffic, the original data
message being forwarded on to another node should be sufficient to
detect breakage.  Section~\ref{sec:modification} describes my
proposal in more detail.  

\subsection{The $\delta$ Factor}

The constant factor, $\delta$, is needed to allow for new \emph{level}s to be
inserted into the network.  When a node discovers its own level has changed, it
can insert itself into the network with a pseudo-distance between its child and
parent nodes.  However, if the difference between the child and parent nodes'
\emph{level}s is only one, then the node could not insert itself.  Therefore,
the $\delta$ factor is used to ensure that a node can insert itself. 
\cite{PDDD} does not discuss what the value of $\delta$ should be.  It is
indeed a difficult value to choose.  

The most robust approach would be to choose it at run time.  For the
first pass of interest messages, all nodes use a $\delta$ value of 1.  Along
with the initial data messages destined for the sink node in question, the
sensors send their pseudo-distance, based on $\delta=1$.  A new $\delta$ value
is then computed, equal to the field's maximum value, say $2^{31} - 1$, divided
by the largest pseudo-distance.  The sink can then propagate this new $\delta$
to all the nodes, which then trivially update their pseudo-distance values via
multiplication.  
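The run-time selection amounts to a single division; a sketch, assuming a signed 32-bit pseudo-distance field:

```java
// Sketch of the run-time delta selection described above: the field's
// maximum value (2^31 - 1 for a signed 32-bit field) divided by the
// largest pseudo-distance observed with delta = 1 (i.e., the hop count).
class DeltaChooser {
    static int chooseDelta(int largestHopCount) {
        return Integer.MAX_VALUE / largestHopCount;
    }
}
```

Since the deepest node's scaled pseudo-distance is then at most $2^{31}-1$, no node's level field can overflow after the update.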

Choosing this maximum value allows the most number of nodes to be inserted into
the network later without changing the $\delta$ value.  However, it is
complicated to implement and prone to error should the network be unstable
during initialization.  Alternatively, $\delta$ can be chosen based on how
large the network planners expect the network to be.  For the purposes of
simulation, I chose an arbitrary value of 100, without loss of generality.  

\section{Comparison}\label{sec:comparison}
While Lee and Lee did do comparative simulations of DD and PDDD using NS-2,
they left out any analytical comparison.  This section attempts to provide a
more analytical approach to the comparison.

\subsection{Memory Usage}

As presented in the paper via simulation, PDDD clearly has less traffic than
DD.  But at what cost?  One of the trade-offs is in terms of memory usage per
node, especially as the number of neighbors and the number of sinks in the
network increases.  

The model for memory usage was determined through careful analysis during the
writing of the simulation code.  The memory statistics do not include code
space, but rather just the data structures required to keep track of neighbor
node statistics for \emph{gradients} or \emph{levels} for DD and PDDD,
respectively.  The assumptions are that addresses are 4 bytes and timestamps
are 8 bytes.  I also assume that half of the neighbors will be upstream, and the
other half downstream.  This affects both protocols because each must keep
timers to determine if a link is broken or not, albeit timers with different
semantics.  See the protocol descriptions above for details.  
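The model can be summarized in a few lines of Java. The per-entry layouts are my own assumptions, consistent with the byte sizes stated above, not figures from either paper.

```java
// Hypothetical per-node memory model: addresses are 4 bytes and timestamps
// 8 bytes, as assumed above. The per-entry layouts are my own assumptions,
// not figures from either paper.
class MemoryModel {
    static final int ADDRESS = 4;    // bytes per node address
    static final int TIMESTAMP = 8;  // bytes per timestamp

    // DD: one gradient per (neighbor, interest): a neighbor address plus
    // interval and expiry timestamps.
    static int ddBytes(int neighbors, int interests) {
        return neighbors * interests * (ADDRESS + 2 * TIMESTAMP);
    }

    // PDDD: one cached level per (neighbor, sink): a neighbor address,
    // three 4-byte level fields, and a breakage-timer timestamp.
    static int pdddBytes(int neighbors, int sinks) {
        return neighbors * sinks * (ADDRESS + 3 * 4 + TIMESTAMP);
    }
}
```

Under this model, PDDD's per-node memory grows linearly with the number of sinks, which is why Lee and Lee's restriction to a small number of sinks matters.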

To be fair, Lee and Lee do limit PDDD for use when there is
a small number of sinks, but they do not quantify what \emph{small} is nor mention the number of neighbors
problem.  Figure~\ref{fig:memory} shows, in 3-D, the memory usage as the number
of neighbors and the number of sink nodes increases.  It is difficult to see
how the number of neighbors affects DD at all, but
Figure~\ref{fig:memoryByNode} shows more clearly how memory per node in DD also
increases, but at a slower rate than PDDD.  

While memory usage may not be the critical factor in designing sensor networks,
it is nonetheless important to keep in mind when choosing either the
acquisition protocols or the node hardware when designing extremely dense
networks.  

\begin{figure}
\begin{center}
\psfig{figure=memory.eps,width=12cm}
\caption{Memory comparison for DD and PDDD.}
\label{fig:memory}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\psfig{figure=memoryByNode.eps,width=12cm}
\caption{Memory comparison as number of neighbors increases.}
\label{fig:memoryByNode}
\end{center}
\end{figure}

\section{Simulation}\label{sec:simulation}

\subsection{SensSim}
For the purposes of this analysis, a simplified simulation engine was developed
in Java called \emph{SensSim}\footnote{The source code for the simulator is
available at
\url{http://triplipse.googlecode.com/svn/trunk/SensSim/}, and can
be built quickly with Apache Maven}. The simulation progresses as a sequence of
ticks, with no real-world time associated with them.  During each tick, a node
processes any packets addressed to it arriving on the links it belongs to, and
in the same tick, forwards on the packets that were received.  Any new packets
it needs to inject into the network are also sent to the link.  The destination of each
packet will process it on the subsequent tick.  For example, a data message
that must traverse 10 nodes to get from source to sink, arrives at the
destination on the 9th subsequent tick.  The packets are cached in a Link object
between ticks.
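A minimal sketch of this tick-delayed delivery follows; the class and method names are illustrative, not SensSim's actual API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of SensSim-style tick delivery: packets sent during one
// tick are queued in the link and only become receivable on the next tick.
// Class and method names are illustrative, not SensSim's actual API.
class TickLink {
    private Queue<String> inFlight = new ArrayDeque<>();
    private Queue<String> arriving = new ArrayDeque<>();

    void send(String packet) {
        inFlight.add(packet);
    }

    // Called once per tick: everything sent last tick becomes deliverable.
    void tick() {
        arriving = inFlight;
        inFlight = new ArrayDeque<>();
    }

    // Returns the next deliverable packet, or null if none.
    String receive() {
        return arriving.poll();
    }
}
```

Chaining one such link per hop reproduces the behavior described above: a packet crossing $n$ links arrives $n$ ticks after it is sent.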

The topology used for analyzing and learning about DD and PDDD is very simple: a
grid of nodes is created, with a given density.  A radio broadcast from a node
is heard by all eight of its neighbors, if they exist.  The user can specify a
node density which determines the percentage of occupied nodes in the grid. A
density of 100 ensures that every node in the grid exists.  One edge of the
grid is all sources and at the opposite edge,  half of the nodes are sinks. 
The density factor does not affect the source or  sink nodes. This topology is
similar to the one used in the original simulations  for Directed Diffusion~\cite{DD}. 

Each node object keeps track of basic statistics during the simulation.  When
complete, a set of cumulative statistics is built based on the various
parameters.  Naturally, a real-world environment could not collect the same
level of statistics about the underlying overheads of the protocols.  

\section{Modification to PDDD}\label{sec:modification}

As mentioned above, PDDD uses acknowledgment of each data message to detect
link breakage.  Many other wireless protocols use eavesdropping of packets to
monitor another node's status.  In sensor networks, the general case is to have
symmetric links with omni-directional antennas.  Therefore, rather than wait
for an explicit acknowledgement, the sending node need only watch for the
same data message, identified by sequence number and source address, being
transmitted to another node.  The monitoring node could go as far as making
sure the destination address is not one of its own neighbors, but this is not
strictly necessary.  When the rebroadcast data message is heard, the node can
clean up its timers the same way that PDDD does when an acknowledgement is received.
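The passive-acknowledgement bookkeeping is simple to sketch; the names here are my own illustration, not part of the PDDD specification.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the passive-acknowledgement idea: after forwarding a data
// message, remember its (source address, sequence number) key; overhearing
// the same key rebroadcast by the next hop confirms delivery. Names are my
// own illustration, not part of PDDD.
class OverhearMonitor {
    private final Set<String> pending = new HashSet<>();

    private static String key(int sourceAddress, long sequence) {
        return sourceAddress + ":" + sequence;
    }

    // Record a data message we just forwarded and are waiting to overhear.
    void sent(int sourceAddress, long sequence) {
        pending.add(key(sourceAddress, sequence));
    }

    // Called for every packet overheard on the shared medium; returns true
    // if it implicitly acknowledges one of our pending messages.
    boolean overheard(int sourceAddress, long sequence) {
        return pending.remove(key(sourceAddress, sequence));
    }

    // Any message still pending past its timer indicates link breakage.
    boolean hasPending() {
        return !pending.isEmpty();
    }
}
```

A breakage timer per pending entry, expiring when no matching rebroadcast is heard, replaces the acknowledgement timer of the original protocol.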

One drawback of replacing active acknowledgements with passive listening is
either receiving more bits, since data messages are typically larger than
acknowledgements, or complexity in the low-level software to stop listening to
data messages after the header fields.  However, it is likely that the software
would already handle partial packet listening anyway to save power.  It would
just be slightly more complicated to instruct the low level software to watch
for other headers besides destination address being the local address or
broadcast. 

Simulations of PDDD were run both with and without acknowledgements turned on. 
I analyzed the resulting average dissipated energy per received packet per
node.  This is one of the parameters used in \cite{PDDD}.  Essentially, it
represents how much energy each node consumed per data message
arriving at a sink, and is a good measure of how efficient the protocol is at
conserving data transmissions.  As expected, without acknowledgements, the
average dissipated energy is lower, by a constant amount, than that of PDDD
with acknowledgements.  

\section{Conclusion}\label{sec:conclusion}
In this paper, I have provided a detailed analysis of Lee and Lee's
Pseudo-Distance Data Dissemination protocol.  I have filled in analytical
holes in their initial presentation, most importantly by clarifying what
``pseudo-distance'' is and by analyzing memory usage. Further, the unexplained
$\delta$ factor value is resolved.  A simulation environment was developed to
fully understand the loose terminology provided in the original PDDD
document~\cite{PDDD}.  Finally, PDDD was modified to remove acknowledgements
between sensor nodes, thereby reducing the overall energy consumption of the network. 

I believe PDDD to be a useful protocol for data acquisition in sensor networks,
with some minor modifications.  In addition to the removal of acknowledgements,
multi-path sending of data messages could be reduced by
choosing a single node to forward data messages toward the sink.  Further work
is required to determine how this would impact the reactive nature to link
breakages in the network.  

\section*{Acknowledgements}

The author would like to thank Professor Michel Barbeau and the students of
COMP5402 in Fall 2007 for their help and breadth of experience in all aspects
of wireless networking.

\bibliographystyle{plain}

\bibliography{mybib}

\end{document}

