\documentclass[10pt,a4paper]{book}

\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{listings}
\usepackage{adjustbox}

\author{Frans Schneider}
\title{Erlang implementation of the Real-time Publish Subscribe (RTPS) protocol}
\date{\today}

\begin{document}

\maketitle
\tableofcontents

\lstset{
  language=erlang,
  basicstyle=\footnotesize,
  breaklines=true,                 % sets automatic line breaking
  captionpos=b,                    % sets the caption-position to bottom
  keepspaces=true,                 % keeps spaces in text
  tabsize=4
}

\chapter{Introduction}

This text is the result of the notes taken while writing the code 
and is incomplete, not well structured, repetitive and totally indigestible.

The RTPS protocol is defined by the Object Management Group; its
current version, 2.2 as of September 2014, can be found in the
\href{https://www.omg.org/spec/DDSI-RTPS/About-DDSI-RTPS/}{DDS
  Interoperability Wire Protocol Specification Version 2.2}. Please note
that the RTPS specification is very much based on the Object Oriented
paradigm, which results not only in a particular design but also
introduces many terms related to OO in its description. This Erlang
implementation is according to the nature of Erlang NOT object oriented,
but still does use many terms from the specification. The following
description is only concerned with how the protocol is implemented in
Erlang and will not describe the protocol itself. Where applicable, text
from the specifications may be copied verbatim or with some changes from
the specifications without further citation; i.e. we shamelessly make
use of the work of others.

The main purpose of this implementation is, besides offering a working
and usable RTPS protocol stack, to offer a system which is easy to
understand, make changes to and use for researching the behavior of the
protocol. Other implementations, whether or not freely available, are
complex pieces of code with often many dependencies on other, mostly
also complex, third party libraries and code. Getting acquainted with
these systems is already hard, let alone fully understanding how the
RTPS specifications relate to the code and making changes to it is
extremely difficult. We think our implementation is easy to deploy,
understand and use for experiments and still offering a full set of
features.

First, the components of the application are listed and their
interdependencies and hierarchy are shown. As with every Erlang
application, supervision and process control are main design principles
and therefore should be made explicit. Next, for each relevant part of
the application a more detailed description of the implementation is
given. However, before really starting out, we will list some common
concepts and terms first:

\begin{description}
  
\item[Transport] The transport is what takes data from A to B. For
  RTPS, the minimum required transport is the UDP multi-cast IPv4
  connectionless protocol, referred to as RTPS-UDP. Applications may
  use different forms of transportation, such as TCP/IP, as
  needed\footnote{It is however a little weird to implement a protocol
    which is supposed to solve reliability issues with the unreliable
    UDP protocol on top of a reliable protocol. One would use TCP/IP
    because it is routable.}.
  
\item[Locator] This is the address used by the transport. For RTPS-UDP
  this is a designation of the protocol (udp plus ipv4), the network
  address and the port used. So, a locator is a combination of a
  protocol designator and an address.
  
\item[Domain] Parties within the same domain share the same address
  space, i.e. they can discover their existence and communicate with
  each other. All parties within a domain share at least a common
  transport mechanism and each of them has a unique id. It is not
  possible to address parties in different domains.
  
\item[Participants] Container of all RTPS entities that share common
  properties and are located in a single address space.
  
\item[Endpoint] An endpoint is the entity within a participant which
  sends or receives cache changes. Each endpoint has a unique id
  within a participant and, together with the unique id of the
  participant, forms the globally unique id (GUID) within a domain. A
  GUID is unique
  within a domain but the same GUID could be used in another domain.
  
\item[Writer] Specialization of RTPS Endpoint representing the objects
  that can be the sources of messages communicating cache changes.
  
\item[Reader] Specialization of RTPS Endpoint representing the objects
  that can be used to receive messages communicating cache changes.
  
\item[History cache] The history cache is used to temporarily store
  and manage sets of changes to data-objects. On the Writer side it
  contains the history of the changes to data-objects made by the
  Writer. On the Reader side it contains the history of the changes to
  data-objects made by the matched RTPS Writer endpoints.
  
\item[Cache change] Represents an individual change made to a
  data-object. Includes the creation, modification and deletion of
  data-objects.
  
\item[Data] Represents the data that may be associated with a change
  made to a data-object.
  
\end{description}
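To make the terminology concrete, the concepts above could be modeled
in Erlang roughly as follows (these record definitions are an
illustrative sketch, not necessarily the ones used in the code):

\begin{lstlisting}
%% Illustrative sketch; the actual definitions in the code may differ.
-record(locator, {kind = udpv4,  % protocol designator: udpv4 | udpv6 | ...
                  address,       % network address, e.g. {239,255,0,1}
                  port}).        % port number

-record(guid, {prefix,           % the participant's guid_prefix
               entity_id}).      % unique entity id within the participant
\end{lstlisting}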

\section{Application layout}

The RTPS implementation shown in Figure~\ref{figure:overall} is an
Erlang supervised application. It runs a supervisor \lstinline{rtps_sup}
which in turn starts a controlling process \lstinline{rtps} which
is used for starting and stopping the main functions of the
application. The \lstinline{rtps} module also implements the exported
API used by user applications. The \lstinline{rtps} process is a named
process in the application allowing easy access to the API.

\begin{figure}
  \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{supervision-tree}
  \caption{Overall structure of the RTPS application}
  \label{figure:overall}
\end{figure}

Also started by the \lstinline{rtps_sup} supervisor are a one-for-one
supervisor which will contain the domains and a registry which keeps
the association between domain ids and the pids of the domain
controllers.
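A sketch of what this top-level supervisor could look like is given
below; the children follow the text above, but the module names,
arguments and details are assumptions rather than the actual
implementation:

\begin{lstlisting}
%% Sketch of the top-level supervisor; details are assumptions.
-module(rtps_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    Children =
        [#{id => rtps,                 % controlling process and API
           start => {rtps, start_link, []}},
         #{id => domains_sup,          % one-for-one holder of the domains
           start => {rtps_ofo_sup, start_link, [rtps_domain_sup]},
           type => supervisor},
         #{id => domain_registry,      % domain id <-> pid association
           start => {rtps_registry, start_link, []}}],
    {ok, {#{strategy => one_for_all}, Children}}.
\end{lstlisting}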

\subsection{Domain}

Domains, identified by their \lstinline{domain_id}, are implemented as
a domain supervisor \lstinline{rtps_domain_sup} which is added to the
previously mentioned one-for-one supervisor which is started from the
\lstinline{rtps_sup} supervisor. The \lstinline{rtps_domain_sup} is a
one-for-all supervisor which supervises the domain controlling process
(\lstinline{rtps_domain}), a one-for-one supervisor used to maintain
the participants within the domain, a registry to keep the association
between participants and their controlling processes and a
transports supervisor. In Figure~\ref{figure:overall}, the domain
supervisors are shown as the ellipses 0, 1, \ldots{} .

The controlling process \lstinline{rtps_domain} implements the required
functionality of a domain including adding and deleting participants.

Each domain has its own transports supervisor which will hold all
transports used within a domain. Transports in a domain can be shared by
different endpoints within the domain but never between endpoints from
different domains. Every transport is associated with a locator.

\subsection{Participant}

The \lstinline{rtps_domain_sup} starts a one-for-one supervisor which holds a
one-for-all supervisor \lstinline{rtps_participant_sup} for every participant
contained by the domain. A participant is identified by a
\lstinline{guid_prefix}, which must be unique within the domain over all
networked nodes involved. It is the responsibility of a user application
to assign a unique \lstinline{guid_prefix} on creating a participant.

The participant consists of the participant controlling process
(\lstinline{rtps_participant}), a one-for-one supervisor for the endpoints
belonging to the participant, a registry to keep track of the endpoints
and the two simple participant and endpoint discovery processes
(\lstinline{rtps_spdp} and \lstinline{rtps_sedp}).

The participant controller's main task is starting and stopping
endpoints. The simple participant discovery protocol (SPDP) process
makes the existence of the participant known to other participants
within the domain and monitors the domain to find and keep track of
remote participants. On discovering a new remote participant, the simple
endpoint discovery protocol (SEDP) process will actively monitor the
endpoints of a remote participant and make information available on the
local endpoints.

\subsection{Endpoints}

Endpoints come in two flavors, writers and readers. Again, each
endpoint is implemented as a one-for-all supervisor with a controlling
writer (\lstinline{rtps_writer}) or reader (\lstinline{rtps_reader})
process, a registry to keep track of the endpoints and a one-for-one
process to supervise the locators and proxies used by the writers and
readers. Endpoints are identified by their \lstinline{entity_id} which
contains fields indicating whether the endpoint is a writer or reader,
its type and a \lstinline{entity_key} to distinguish between similar
endpoints. The participant's \lstinline{guid_prefix} plus the
endpoint's \lstinline{entity_id} together form the unique
\lstinline{guid} of each endpoint in a domain. The \lstinline{guid}
allows for addressing each individual endpoint within a domain.
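Forming a guid is then simply the concatenation of the two parts; a
minimal sketch, assuming the 12-byte prefix and 4-byte entity id sizes
from the RTPS specification:

\begin{lstlisting}
%% Sketch: a guid is the 12-byte guid_prefix followed by the
%% 4-byte entity_id; the function name is illustrative.
guid(GuidPrefix, EntityId)
  when byte_size(GuidPrefix) =:= 12, byte_size(EntityId) =:= 4 ->
    <<GuidPrefix/binary, EntityId/binary>>.
\end{lstlisting}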

Depending on the writer's type being stateless or stateful, a writer
will make use of locators or proxies to communicate with remote readers.
A reader, in case of being stateful, will use a writer proxy to connect
to a remote writer.

Each writer is associated with a history cache which is the actual
interface between your application and the RTPS implementation. Your
application must serialize the data to be published using some form
you feel is appropriate, after which you request the writer to
allocate a sequence number to use for storing the serialized data in
the cache and store that data as a so-called \lstinline{cache change}
in the history cache. As far as your application is concerned, the
writer is only responsible for allocating sequence numbers.
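The publishing steps described above might look roughly as follows;
all module and function names in this sketch are illustrative
assumptions, not the actual API:

\begin{lstlisting}
%% Hypothetical publish sequence: serialize, get a sequence number,
%% store the cache change in the writer's history cache.
publish(Writer, Cache, Data) ->
    Serialized = term_to_binary(Data),  % any serialization form will do
    SeqNum = rtps_writer:new_sequence_number(Writer),
    Change = #{writer_guid     => rtps_writer:guid(Writer),
               sequence_number => SeqNum,
               data            => Serialized},
    ok = rtps_history_cache:add_change(Cache, Change).
\end{lstlisting}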

Every reader is also associated with a history cache which is filled by
the reader or the proxy associated with it, with the cache changes
received from remote writers. The history cache, again, forms the
interface between a user application and the RTPS application.

\subsubsection{Locators and proxies}

Except for a stateless reader, writers and readers make use of locators
or proxies to communicate with remote peers. A locator-based transport
is very similar to a wireless broadcast and receiving scenario: the
publisher (writer) broadcasts the cache changes without knowing or
caring which parties are 'tuning in' while the subscribers (readers)
receive all cache changes from all publishers that use the transport
for sending. In this set-up, writers and readers are stateless and keep
no knowledge of the remote side.

Proxies on the other hand are associated with each individual remote
peer. A writer uses a 'reader proxy' to communicate with a particular
reader. A reader will use a 'writer proxy' to do the same.

As mentioned before, every endpoint has a one-for-one supervisor used
for the locators and proxies used by the endpoint.

\section{Data-flow}

A user application writes data to the RTPS application for publishing.
As mentioned before, this requires the user application to request a
sequence number from a RTPS writer, serialize the data to be published
and store the combination of sequence number and data as a cache change
in the history cache. At the receiving side, RTPS either notifies the
user application data is available or the user application polls for the
availability of new data, after which the user application fetches the
data from the history cache, either leaving the data in the readers
cache for future use or indicating to the reader the data no longer is
needed. The user application takes the cache change, extracts the
serialized data and deserializes.
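On the reading side, the polling variant of this flow could be
sketched as follows; again, the module and function names are
illustrative assumptions:

\begin{lstlisting}
%% Hypothetical read sequence: fetch the next cache change, deserialize
%% and, when done with it, tell the cache the change is no longer needed.
read_next(Cache, LastSeqNum) ->
    case rtps_history_cache:get_change(Cache, LastSeqNum + 1) of
        {ok, #{data := Serialized} = Change} ->
            Value = binary_to_term(Serialized),
            ok = rtps_history_cache:remove_change(Cache, Change),
            {ok, Value};
        not_found ->
            no_new_data
    end.
\end{lstlisting}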

The user application writing a cache change to the history cache,
indicates to the writer the data should be processed further. It now
will depend much on whether the writer is stateful or stateless and the
reliability level is best-effort or reliable what will happen next,
including running filters, rearranging cache changes according to some
priority, splitting cache changes to fit within some operating limits
and keeping cache changes for retransmission etc., all done in the
writer's reader locators or reader proxies with the writer process
itself having little or nothing to do with all this processing. All
processing results in a bunch of RTPS sub-messages containing the cache
changes and housekeeping information destined for a particular
transport.

A transport can either be shared by more than one writer or dedicated to
a single process, which is under control of the user application while
configuring RTPS. The transport is fed a stream of RTPS sub-messages and
will aggregate these sub-messages into the actual RTPS messages which
will enter the network. Aggregating these sub-messages is bound by a
number of rules, such as maximum message size, adding addressing
information, time-stamping etc. This is also the point within the
data-flow during publishing that more than one source may come together.
Because the network is considered a limiting resource, the aggregation
process of the transport is also the designated place to optimize the
message assembly by cleverly combining sub-messages to reduce network
overhead. With the resource limited network as its output, the transport
aggregating process is the logical buffering pivot point in the
data-flow.

A transport is of a particular type (RTPS-UDP) and uses a defined
address, all given by the locator. The transport is implemented as a
process which opens a network socket and will do the sending and
receiving of RTPS messages. The transport starts a child process, the
transport aggregation which does the collecting of the sub-messages: it
is this process which the reader locators and writer and reader proxies
talk to for sending the sub-messages and not the transport process
itself.

The modules involved with sending are:

\begin{itemize}
\item
  \lstinline{rtps_transport} and
\item
  \lstinline{rtps_sender} which is the process aggregating the RTPS messages.
\end{itemize}

On receiving RTPS messages, the transport process will inspect the
message to see if it complies with some rudimentary rules and pass it
on to the transport receiver, which is another child process of the
transport process. It is the task of the receiver to take apart the
RTPS message and distribute the sub-messages to the endpoints they are
supposed to go to. The receiver process acts as the buffering pivot
point in the data-flow while receiving data.

In the sending data-flow of the RTPS application, the
\lstinline{rtps_sender} is the designated buffer, while for the
receiving data-flow the buffer is the \lstinline{rtps_receiver}. All
other interfaces between modules are considered to have no constraints.

The modules concerned for receiving messages are:

\begin{itemize}
\item
  \lstinline{rtps_transport} and
\item
  \lstinline{rtps_receiver}
\end{itemize}

The RTPS receiver \lstinline{rtps_receiver} is part of the
specification of the protocol and follows a set of rules as set in the
specifications. The sender \lstinline{rtps_sender} is not part of the
specifications but is the logical counter-part of the receiver and
follows similar rules like the receiver. The \lstinline{rtps_sender}
probably is the most critical module as far as performance is
concerned.

\subsection{Some remarks concerning transports}

As mentioned before, transports are kept local to a domain with each
domain having a transport supervisor for the transports used within that
domain. This is a logical consequence of domains being totally separate
and independent address spaces. Participants from one domain can never
ever communicate with participants from another domain and therefore it
is unneeded and undesirable to share transports between domains. In case
data must be transferred from one domain to another, an application may
connect to these domains directly and act as a bridge between them.

The RTPS standard specifies RTPS-UDP as the transport type each
implementation of the protocol should support for interoperability.
RTPS-UDP is multi-cast UDP on an IP version 4 network using a default
multicast IP address and a set of port numbers assigned according to a
set of rules. Each domain is assigned a set of two or more ports based on
its domain id, a port for discovery and at least one port for user data.
Because the assignment of port numbers is predefined, network nodes are
able to find remote nodes within a domain.
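Using the default values from the RTPS specification (port base PB =
7400, domain id gain DG = 250, offsets d0 = 0 for discovery multicast
and d2 = 1 for user-data multicast), the multicast port mapping can be
sketched as:

\begin{lstlisting}
%% Default RTPS-UDP multicast port mapping (spec defaults; the unicast
%% ports additionally involve the participant id).
-define(PB, 7400).
-define(DG, 250).

discovery_port(DomainId) -> ?PB + ?DG * DomainId + 0.  % d0
user_data_port(DomainId) -> ?PB + ?DG * DomainId + 1.  % d2
\end{lstlisting}

This mapping is also what limits the number of usable domain ids:
domain id 232 still yields ports below 0xFFFF, while 233 does not.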

The RTPS standard allows for other transport types besides RTPS-UDP to
be implemented, which many vendors actually do, most notably TCP.
In contrast to UDP, TCP allows for better routing, which may be
desirable if the network topology is more complex. However, TCP is a
reliable network protocol, which RTPS tries to be itself, which is a
little double: i.e., why implement a protocol to circumvent network
unreliabilities on top of a reliable network protocol? Also, RTPS is,
for reasons of efficiency, intended to use the publish and subscribe
pattern with many receivers listening to (concurrent) publishers, and
TCP is definitely not a multi-cast protocol. Last but not least, TCP is
connection oriented and not a packet based protocol, while RTPS is
definitely message oriented. In summary, TCP seems not to be a
particularly good choice as an RTPS transport type, except for routing
purposes. A protocol such as SCTP may be a better choice but is not
supported as commonly as TCP is.

\section{Summary}

We introduced both a first hierarchical overview of the RTPS application
as it is implemented in this project and gave a short description of the
data-flow within the application. The application uses several levels of
supervision and separate processes where appropriate to allow the
concurrent processing of data and minimize the consequences of failures.
Within the sending and receiving data-flows, there are two clear
pivoting points that act as buffers, the \lstinline{rtps_sender} and
\lstinline{rtps_receiver} respectively.

The RTPS
\href{https://www.omg.org/spec/DDSI-RTPS/2.2/PDF}{specifications}
give many more details on the protocol. The following sections
will discuss the implementation of the Erlang modules that make up the
application in more detail.

\chapter{Implementation details}

Not every single detail of the implementation will be discussed but only
the more interesting aspects that may not be obvious right away, choices
made that require some explanation and points of interest for further
development or experiments. For more information on the modules, have a
look at the Erlang generated documentation. You probably need to study
some learning material first if you are not yet familiar with Erlang
before continuing.

The modules are discussed more or less according to the hierarchical
overview given in the introductory chapter.

\section{Limitations}

The following hard coded limitations apply:

\begin{enumerate}

\item
  The number of domains is limited to 233 due to the limited UDP
  port range. Domain ids above 232 will result in a port above
  0xFFFF;

\item
  The number of participants is limited to 120 to prevent port
  overlays between
  domains\footnote{\href{https://community.rti.com/kb/what-maximum-number-participants-domain}{What
      is the maximum number of participants per domain?}};

\item
  Only UDP/IPv4 and UDP/IPv6 are currently implemented;

\item
  Fragmentation is not (yet) implemented, limiting the data content size
  to the MTU used and to less than 64 Kbytes at most.

\end{enumerate}

\section{Supervision strategies}

There are several levels of supervision in the application, which may be
a little confusing. Supervision is used to deal with failures and to
start and stop parts of the application, such as starting and stopping
domains, participants etc. The supervisors which supervise a domain,
participant or endpoint plus the overall supervisor use the one-for-all
restart strategy. If any one of the processes that such a supervisor
contains fails, all processes must be restarted. If the controller fails,
it will lose all knowledge accumulated during its lifetime and therefore
all sibling processes should be restarted. If the registry
fails, it loses track of all siblings, which also is part of the
accumulated knowledge, etc. The supervision hierarchy also takes
care that all related processes are started but also stopped as needed:
stopping a domain, for example, will terminate all transports,
participants, endpoints, etc.\ within that domain without affecting other
domains.

The domain, participant, endpoint and locator/proxy supervisors are each
under the control of a one-for-one supervisor. If for example a domain
crashes, only that domain is to be restarted without further affecting
other domains. Domains, participants etc, are fully isolated. The
one-for-one supervisor is implemented for all levels within the
hierarchy using the same \lstinline{rtps_ofo_sup} module.
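A sketch of such a generic supervisor module is shown below; the real
\lstinline{rtps_ofo_sup} may differ in its details (for dynamically
added identical children, the \lstinline{simple_one_for_one} strategy
is a common choice):

\begin{lstlisting}
%% Sketch of a generic supervisor reused at every level of the
%% hierarchy; module internals are assumptions.
-module(rtps_ofo_sup).
-behaviour(supervisor).
-export([start_link/1, start_child/2, init/1]).

start_link(ChildModule) ->
    supervisor:start_link(?MODULE, ChildModule).

start_child(Sup, Args) ->
    supervisor:start_child(Sup, Args).

init(ChildModule) ->
    ChildSpec = #{id    => ChildModule,
                  start => {ChildModule, start_link, []},
                  type  => supervisor},
    {ok, {#{strategy => simple_one_for_one}, [ChildSpec]}}.
\end{lstlisting}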

\section{Registry}

Why use registries instead of OTP's local name registry? The most
practical reason is that the local name registry works with atoms
only. The ids used are composite: the domain id for domains, the
domain id plus the \lstinline{guid_prefix} for participants, and the
domain id, \lstinline{guid_prefix} and \lstinline{entity_id} for
endpoints. Turning these into atoms would result in a growing number
of atoms and extra processing for calculating the atoms.

Another good reason to put registries within the domain, participant and
endpoint is that the ids registered are only relevant for the enclosing
entity and are never used outside the context of the entity.
Participants are only relevant within the domain and endpoints are only
relevant within a particular participant.

An extra bonus of using a registry per context is that the number of
registered ids per registry will be kept smaller than with a global
registry, and lookups can run in parallel per context.

The added complexity of implementing and managing these registries is
relatively small.
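A minimal registry can indeed be small; the sketch below keeps the
id-to-pid associations in a map held by a single process (the actual
\lstinline{rtps_registry} implementation may differ):

\begin{lstlisting}
%% Minimal registry sketch: one process per context mapping ids to pids.
-module(rtps_registry).
-export([start_link/0, register_id/3, lookup/2]).

start_link() ->
    {ok, spawn_link(fun() -> loop(#{}) end)}.

register_id(Registry, Id, Pid) ->
    Registry ! {register, Id, Pid},
    ok.

lookup(Registry, Id) ->
    Registry ! {lookup, Id, self()},
    receive {Registry, Result} -> Result after 5000 -> undefined end.

loop(Map) ->
    receive
        {register, Id, Pid} ->
            loop(Map#{Id => Pid});
        {lookup, Id, From} ->
            From ! {self(), maps:get(Id, Map, undefined)},
            loop(Map)
    end.
\end{lstlisting}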

\section{History cache}

The history cache has three parameters which determine how it behaves.

\begin{itemize}

\item
  The \lstinline{kind} is either \lstinline{keep_last} or \lstinline{keep_all};

\item
  The \lstinline{depth} parameter is only relevant if \lstinline{kind}
  is \lstinline{keep_last} and sets the maximum number of most recent
  cache changes the history cache will keep. If undefined, there will be no limit
  set and cache changes are stored as long as the system doesn't run
  out of memory. A typical value would be 1, which will result in the
  history cache only keeping the most recent cache change, much like a
  simple variable;
  
\item
  The \lstinline{duration}\footnote{See description of the DURABILITY
    QoS policy in the Data Distribution Service, v1.4 specs p93 }
  determines if cache changes are more or less volatile.

\end{itemize}
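The effect of these parameters on adding a change can be sketched as
follows, modeling the stored changes as a list with the most recent
change first (function and parameter names are illustrative):

\begin{lstlisting}
%% Sketch of kind/depth behavior when adding a cache change.
add_change(Change, Changes, #{kind := keep_all}) ->
    [Change | Changes];
add_change(Change, Changes, #{kind := keep_last, depth := undefined}) ->
    [Change | Changes];                       % no limit set
add_change(Change, Changes, #{kind := keep_last, depth := Depth}) ->
    lists:sublist([Change | Changes], Depth). % keep the Depth most recent
\end{lstlisting}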

Transient and transient\_local in this implementation are not different
because the history cache is implemented as a separate, independent,
process and its existence in association with an endpoint only depends
on whether the cache was started explicitly or implicitly by the
endpoint. If the endpoint is started without passing a history cache as
a parameter, it will start a cache by itself and, on terminating that
endpoint, the history cache will also be discarded, making for the
`transient\_local' cache. If however a cache is created separately and
passed into the endpoint, the cache will stay in existence after the
endpoint terminates and can be reused, making for the `transient' cache.
Transient and transient\_local are treated the same in the
\lstinline{rtps_history_cache} module.

\section{Receiving messages}\label{sec:receiving-messages}

The receiver disassembles the received RTPS message using the embedded
info-* sub-messages to augment the data carrying messages, i.e. the data
and gap sub-messages. The data and gap messages carry information on the
cache changes published by the writer and received by the reader.

\subsection{Distributing the submessages}

NB: The following text (ramblings) is what remains of notes taken while
implementing the code.

The receiver has to forward these cache change related sub-messages to
the receiving endpoints. The current approach is to have each endpoint
register itself with a transport, the transport adding the endpoint to
the list of potential receiving processes. On receipt of a RTPS message,
this list is forwarded to the receiver alongside the RTPS message after
which the receiver disassembles the RTPS message and uses the list to
find the pids of the endpoints.

Sub-messages should only be distributed to endpoints that have connected
to a transport. The issue is whether the list of endpoints should
contain entity\_ids only or pids as well. In the first situation, it
is assumed that the endpoint's pid is looked up when needed, which
allows the endpoint to be restarted or moved to another process. In
the second case, the pids are used directly. The question is: do we
gain something from looking up the entity ids for every RTPS message,
or can we just use the pids of the endpoints?

\begin{itemize}

\item
  Restarting an endpoint will have the endpoint re-opening the
  transport, so there is no need to look up the pid;

\item
  If a transport goes down, it should take with it all endpoints
  connected, i.e. endpoints do not re-register.

\end{itemize}

Conclusion: just use the pids of the endpoints directly.

For the list of endpoints the receiver uses for distributing the
received messages, the following rules apply:

\begin{enumerate}

\item
  All endpoints have their local guid set;

\item
  A stateless best-effort writer has its remote guid set to
  \lstinline{undefined}.  It will broadcast cache changes and never
  ever receive anything;
  
\item
  A stateless reliable writer has its remote guid set to
  \lstinline{unknown}. It does broadcast its cache changes but can
  also receive repair requests from any remote endpoint;
  
\item
  A stateful best-effort writer has its remote guid set to
  \lstinline{undefined}.  It sends the cache changes to a specific
  remote reader, may filter on behalf of the remote reader, but will
  not accept any incoming messages;
  
\item
  A stateful reliable writer has its remote guid set to the actual guid
  of the remote reader. The writer will send the cache changes, after
  eventual filtering on behalf of the remote reader, to the specified
  remote guid and process incoming submessages from the remote reader;
  
\item
  A stateless best-effort reader has its remote guid set to
  \lstinline{unknown} since it will accept cache changes from
  any writer. It will however only process cache change messages and
  not any of the housekeeping messages; it is not the
  responsibility of the receiver to determine which submessages are
  forwarded to the stateless best-effort reader;
  
\item
  There is no stateless reliable reader;
  
\item
  A stateful best-effort reader has its remote guid set to a specific
  remote writer's guid. It will accept cache changes from only this
  remote writer. This type of reader will not communicate back to the
  writer, but that is of no concern to the receiver;
  
\item
  A stateful reliable reader also has its remote guid set to a specific
  remote writer's guid.

\end{enumerate}

An incoming submessage always has its source guid set. The destination
guid can either be the exact guid of the endpoint the submessage is
intended for or, in case of a broadcasted submessage, can have the
value \lstinline{unknown} or have its guid prefix set to
\lstinline{unknown} and its entity id set to a specific value. A truly
'dumb' stateless writer has no clue on what remotes might be
interested and will send submessages with the destination guid set to
\lstinline{unknown} and it will be up to the readers to determine if a
submessage is relevant or not.  The specs do describe such a 'dumb'
stateless writer in which the destination guid always will be
\lstinline{unknown}
\footnote{It will be relatively easy to add functionality to the
  stateless writer to add the destination's entity id.}.
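The destination matching rule can be expressed compactly; in this
sketch a guid is modeled as a \lstinline{{Prefix, EntityId}} tuple and
the endpoint reference as a map, both of which are assumptions for
illustration:

\begin{lstlisting}
%% Sketch: does a submessage with destination guid DstGuid concern
%% this endpoint? A match is the endpoint's own guid, a full broadcast,
%% or an unknown prefix combined with a matching entity id.
matches(#{local_guid := Guid}, Guid) ->
    true;
matches(_Endpoint, unknown) ->
    true;
matches(#{local_guid := {_Prefix, EntityId}}, {unknown, EntityId}) ->
    true;
matches(_Endpoint, _DstGuid) ->
    false.
\end{lstlisting}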

The specs
\footnote{Also see Table 8.44 - Possible combinations of attributes
  for a matched RTPS Writer and RTPS Reader (p69)} allow for the
following combinations of writers and readers; basically, a reliable
reader requires the writer to be reliable as well, and all other
combinations are possible as long as the topic kind of both writer and
reader is the same. Since the topic kind is not a parameter
considered by the receiver, it is not considered for distribution.
Table~\ref{tab:writer_reader_comb} shows the possible combinations,
the destination guid the writer uses and the remote guid
the reader has defined.

\begin{table}
  \centering
  \rotatebox{90}{
    \begin{minipage}{\textheight}
      \begin{tabular}{|l|l|l|c|l|l|}
	\hline 
	Writer & Remote guid (see A) & Destination guid & Direction & Reader & Remote guid (see B) \\
	\hline 
	Stateless, best-effort & `undefined` & `unknown` & $\rightarrow$ & Stateless, best-effort & `unknown` \\
	\hline 
	Stateless, best-effort & `undefined` & `unknown` & $\rightarrow$ & Stateful, best-effort & GUID\_writer \\
	\hline 
	Stateless, reliable & `unknown` & `unknown` & $\rightarrow$ & Stateless, best-effort & `unknown` \\
	\hline 
	Stateless, reliable & `unknown` & `unknown` & $\rightarrow$ & Stateful, best-effort & GUID\_writer \\
	\hline 
	Stateless, reliable & `unknown` & `unknown` & $\leftrightarrow$ & Stateful, reliable & GUID\_writer \\
	\hline 
	Stateful, best-effort & `undefined` & GUID\_reader & $\rightarrow$ & Stateless, best-effort & `unknown` \\
	\hline 
	Stateful, best-effort & `undefined` & GUID\_reader & $\rightarrow$ & Stateful, best-effort & GUID\_writer \\
	\hline 
	Stateful, reliable & GUID\_reader & GUID\_reader & $\rightarrow$ & Stateless, best-effort & `unknown` \\
	\hline 
	Stateful, reliable & GUID\_reader & GUID\_reader & $\rightarrow$ & Stateful, best-effort & GUID\_writer \\
	\hline 
	Stateful, reliable & GUID\_reader & GUID\_reader & $\leftrightarrow$ & Stateful, reliable & GUID\_writer \\
	\hline 
      \end{tabular} 
      \caption{Writer and reader combinations}
      \label{tab:writer_reader_comb}
    \end{minipage}
  }
\end{table}

Note A: this remote guid is the guid the writer will accept
housekeeping submessages from.

Note B: this remote guid is the guid the reader will accept cache change
submessages from.

The different guids involved in distribution are:

\begin{itemize}

\item
  \lstinline{src_guid} is where a message or request comes from. This is always a
  fully qualified guid because it is defined by the participant and
  endpoint combination which has emitted the message or request and is
  embedded in the message or request. Every endpoint has its own guid.
  The receiver determines the \lstinline{src_guid} for every incoming message or
  request;
  
\item
  \lstinline{dst_guid} is where a message or request is sent to. In case of a
  pure broadcast, the destination guid is \lstinline{unknown}. In case the message
  or request is intended for a particular remote participant's endpoint,
  the destination guid is fully qualified. There are two more options,
  in which messages and requests are addressed to specific endpoints in
  all participants or all endpoints in a particular participant. The
  first of these two options is a valid option, but the second one is
  disputable and not further considered. For the first option, the
  \lstinline{dst_guid_prefix} is \lstinline{unknown} and the \lstinline{dst_entity_id} is
  defined.
  The receiver determines the \lstinline{dst_guid} for every incoming message or
  request;
  
\item
  \lstinline{local_guid}. This is the guid of the local endpoint and thus is
  always fully qualified. For every local endpoint the receiver keeps a
  reference to the endpoint including the \lstinline{local_guid};
  
\item
  \lstinline{remote_guid}. An endpoint may or may not know which remote
  endpoints it is associated with, and thus from which endpoints it may
  expect incoming messages or requests, if any. An endpoint may well be
  unable to process incoming requests, as in the case of a stateless
  best-effort writer, in which case the remote guid is
  \lstinline{undefined}.
  The receiver keeps the endpoint's \lstinline{remote_guid} in the same endpoint
  reference mentioned earlier. In case the \lstinline{remote_guid} is
  \lstinline{undefined}, the endpoint should not be considered while processing
  incoming messages or requests;
  
\item
  When multicast loopback mode is set, RTPS messages sent are also
  received. This mode is allowed but should be used only for testing
  purposes. The publisher must be prevented from receiving its own
  messages.

\end{itemize}

An endpoint with \lstinline{remote_guid} set to \lstinline{undefined}
should not be considered by a receiver.
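
The rules above could be sketched as follows; the record, the function
name and the tuple representation of a guid as
\lstinline!{GuidPrefix, EntityId}! are hypothetical illustrations, not
the actual implementation:

\begin{lstlisting}
%% Sketch: should this endpoint process a submessage coming from SrcGuid?
%% Guids are assumed to be {GuidPrefix, EntityId} tuples.
-record(endpoint_ref, {local_guid, remote_guid, pid}).

accepts(#endpoint_ref{remote_guid = undefined}, _SrcGuid) ->
    false;                 % endpoint handles no incoming traffic at all
accepts(#endpoint_ref{remote_guid = unknown}, _SrcGuid) ->
    true;                  % endpoint accepts any remote source
accepts(#endpoint_ref{remote_guid = {unknown, EntityId}},
        {_SrcPrefix, EntityId}) ->
    true;                  % a specific entity id in any participant
accepts(#endpoint_ref{remote_guid = Guid}, Guid) ->
    true;                  % fully qualified match
accepts(#endpoint_ref{}, _SrcGuid) ->
    false.
\end{lstlisting}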

Reliability mode:

\begin{itemize}

\item
  best-effort for a writer means it will accept no input. This is
  relevant for the receiver;

\item
  best-effort for a reader means it will not send output. Not relevant
  for the receiver;

\item
  reliable for a writer means it will accept input. Relevant for the
  receiver;

\item
  reliable for a reader means it will send output. Not relevant for the
  receiver.

\end{itemize}

The reliability mode determines the \lstinline{remote_guid} used when opening the
transport, which is set to \lstinline{undefined} in case of best-effort. In case
the reliability level is set to reliable, the \lstinline{remote_guid} is set to
either \lstinline{unknown} or the remote's guid, or may have the remote guid prefix
set to \lstinline{unknown} and the \lstinline{entity_id} set to a particular value,
depending on the state of the endpoint.
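
This mapping could be sketched as follows; the function name and guid
representation are illustrative:

\begin{lstlisting}
%% Sketch: derive the remote_guid used when opening the transport
%% from the reliability mode and the endpoint's knowledge of the remote.
remote_guid(best_effort, _Remote) ->
    undefined;                      % best-effort: accept no input
remote_guid(reliable, unknown) ->
    unknown;                        % reliable: accept any remote source
remote_guid(reliable, {unknown, EntityId}) ->
    {unknown, EntityId};            % specific entity id, any participant
remote_guid(reliable, {GuidPrefix, EntityId}) ->
    {GuidPrefix, EntityId}.         % fully qualified remote endpoint
\end{lstlisting}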

\subsection{Accumulation per endpoint}

Every time the receive loop is processed, a submessage becomes available
which can be destined for one or more endpoints. Such a submessage
should be appended to the list of already accumulated submessages per
endpoint in the order they are received.

The order of the submessages is significant. The sequence number in DATA
submessages must be ascending, especially for best-effort readers, and
HEARTBEAT and ACKNACK messages have a count field which must be
ascending as well. The order in which the submessages appear on the wire
must be preserved, but they can be accumulated per type. The accumulated
submessages can be passed on to the endpoint and processed as a whole.
The order in which the submessage types are passed on to the endpoints
is also significant. For a reader, the order is:

\begin{enumerate}

\item
  DATA and GAP submessages, the order of which probably is less
  relevant. Putting the GAP messages first may mark cache changes as
  irrelevant while a DATA submessage could carry the cache change prior
  to becoming irrelevant. This, however, is very unlikely to occur
  within a single RTPS message;

\item
  The HEARTBEAT message should be processed after the DATA and GAP
  messages because it is used to determine which cache changes are still
  missing. In theory, an RTPS message may contain more than one HEARTBEAT
  from the same writer. The last HEARTBEAT is the newest HEARTBEAT.
  Because a reader may observe a heartbeatSuppressionDuration, only the
  last HEARTBEAT from the same writer in an RTPS message should be
  forwarded to a reader endpoint;

\item
  For fragments, the same order as for regular cache changes should be
  observed: DATA\_FRAG first, followed by HEARTBEAT\_FRAG.

\end{enumerate}

The writer can receive ACKNACK messages. Order is important and should
be kept. In case a single RTPS message contains more than one ACKNACK
originating from the same reader and destined towards the same writer,
these messages may contain different information. All ACKNACKs must
therefore be forwarded, even considering the existence of the
nackSuppressionDuration parameter.
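
The reader-side ordering described above could be sketched as follows;
the submessage tuple shapes and function names are hypothetical:

\begin{lstlisting}
%% Sketch: order accumulated submessages for a reader. DATA and GAP go
%% first (in arrival order), followed by only the last HEARTBEAT per
%% writer, also in arrival order.
order_for_reader(Submessages) ->
    {DataGap, Rest} =
        lists:partition(fun({Kind, _}) ->
                                Kind =:= data orelse Kind =:= gap
                        end, Submessages),
    DataGap ++ last_per_writer([S || {heartbeat, _} = S <- Rest]).

last_per_writer([]) ->
    [];
last_per_writer([{heartbeat, #{writer := W}} = Hb | Tail]) ->
    case lists:any(fun({heartbeat, #{writer := W2}}) -> W2 =:= W end,
                   Tail) of
        true  -> last_per_writer(Tail);  % a newer HEARTBEAT from W follows
        false -> [Hb | last_per_writer(Tail)]
    end.
\end{lstlisting}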

\section{Fragmentation}

RTPS limits the maximum size of an RTPS message to 64 KB. Some of that
size is taken up by the RTPS message header and the submessage headers,
which leaves slightly less than 64 KB for content. Any content larger
than that must be fragmented.

Then there is the transport medium, which may further limit the room
available for content. The maximum UDP message size is, like the RTPS
message, 64 KB minus a few bytes for the IP header and UDP header. The
RTPS message must fit within this, which limits the content size a
little more. In practice, however, UDP packets are often kept much
smaller to prevent fragmentation of the packet at the network level. A
common packet size is 1500 bytes, the typical Ethernet MTU. Taking into
account the IP, UDP and RTPS overhead, this leaves some 1396 bytes for
the content.

Then there is another limitation in case the Inline\_qos field is
filled. Depending on the settings of the writer, the current Inline\_qos
must be added to every data message. The Inline\_qos is a list of
parameters, which may vary both in the number of parameters and in the
size of some of the parameter values.

Taking into account the transport's maximum size, the overhead and the
presence of the Inline\_qos field, the maximum available size for the
content can be calculated. If the available size is large enough to
accommodate the data, a single data message suffices; if not, the data
messages must be fragmented.
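
The calculation can be sketched as follows. The overhead constants are
illustrative, not taken from the implementation: 28 bytes is the fixed
IPv4 plus UDP header size, and the RTPS overhead is chosen so that a
1500-byte packet with an empty Inline\_qos leaves the roughly 1396
content bytes mentioned above:

\begin{lstlisting}
%% Sketch: available content size per data message (constants illustrative).
-define(IP_UDP_OVERHEAD, 28).   % 20 bytes IPv4 header + 8 bytes UDP header
-define(RTPS_OVERHEAD, 76).     % RTPS message + submessage headers (approx.)

max_content(PacketSize, InlineQosSize) ->
    PacketSize - ?IP_UDP_OVERHEAD - ?RTPS_OVERHEAD - InlineQosSize.

needs_fragmentation(DataSize, PacketSize, InlineQosSize) ->
    DataSize > max_content(PacketSize, InlineQosSize).
\end{lstlisting}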

There is a limit to the content size even when using fragmentation, due
to the size of the unsigned 32-bit lastFragmentNum field in the
HeartBeatFrag submessage, which puts an upper bound of about 4 TB on
the content size.

Fragmentation is not (yet) implemented. The specifications are a little
confusing on this subject and it is not yet clear how to fit
fragmentation into the current design.

The problem with fragmentation is to determine at which level the
fragmentation takes place. The most logical level is the transport
level, because fragmentation depends on the transport medium and that is
directly tied to the \lstinline{rtps_sender}. The problem that then
arises is that the protocol prescribes that a NackFrag must also be
handled, without indicating under which configurations. In general, the
specs state that the HeartbeatFrag and NackFrag messages correspond to
the Heartbeat and AckNack messages. Heartbeat and AckNack messages,
however, only apply when the writer is configured for reliable
communication, in which case the writer can also process incoming
messages, in contrast to best-effort, where the writer handles no
incoming traffic.

If fragmentation is handled at the transport level, one would expect
NackFrag messages to be handled at the transport level as well, but that
is not the case. A NackFrag is handled at the level of a reader locator
or reader proxy, where the response consists of a DataFrag message with
the fragments. The fragments must be distilled from the cache changes.

Another approach is to place fragmentation at the level of the reader
locator or proxy. As long as a reader proxy uses only a single transport
there is no problem. The transport is known, and with it its
limitations, after which it can be decided whether to fragment or not.
Instead of Data messages, DataFrag messages are then injected. However,
it is then no longer possible to further optimize an RTPS message by
combining messages from different sources, because the DataFrag messages
are already tailored to the maximum message size. There is no on-the-fly
optimization.

\section{Discovery}

\subsection{The Simple Participant Discovery Protocol (SPDP)}

The SPDP process runs as a separate process directly under the participant's supervisor, at the same level as the participant controller, registry and endpoints supervisor. Each participant has its own SPDP process running, which is in line with what the specs describe.

The general idea is, according to `8.5.3.1 General Approach', that for each local participant a writer endpoint is created which broadcasts the participant's existence and a reader endpoint is started to listen for remote participants.
The \lstinline{rtps_spdp} module uses a single history cache to store information on the local participant within a domain and the remote participants discovered. It will start a stateless best-effort reader and writer which are associated with a local participant. These readers and writers use the GUID prefix of their local participant and the predefined entity ids \lstinline{BuiltinParticipantMessageWriter} and \lstinline{BuiltinParticipantMessageReader}.

On initialization, the SPDP will query the participant for relevant information described by the \lstinline{spdp_discovered_participant_data} (see spdp\_discovered\_participant\_data p125), which is turned into a parameter list and stored as the serialized data of a cache change in the history cache. On a regular basis, the \lstinline{resend_period}, the \lstinline{BuiltinParticipantMessageWriter} is instructed to resend the history cache's content by calling the writer's \lstinline{unsent_changes_reset} function.

NOT IMPLEMENTED YET: If a change is made to the participant's arguments which is relevant for the SPDP process, the participant must inform the SPDP process to update the cache.

When the participant is deleted, the corresponding SPDP process is deleted as well and the announcements made by the SPDP process are stopped. The SPDP does inform remote participants that the participant is terminated by sending a keyed \lstinline{not_alive_disposed} cache change, which the remote participants are supposed to detect and act on. Remote participants may also conclude that a participant has disappeared after not receiving any announcements within the \lstinline{lease_duration} period.

The built-in SPDP reader listens on one or more (multicast) locators for announcements made by remote participants. These announcements, in the form of DATA messages, are stored in the SPDP's history cache and will trigger the SEDP.

On detecting a remote participant, the SPDP will start tracking it and will inform the SEDP of the newly detected remote participant. Each remote participant uses a sequence number, which is incremented whenever the remote participant updates its parameters; the SPDP uses it to detect changes in the remote participant. If the last noted sequence number is less than the received sequence number, the SPDP knows a change in the remote participant's data occurred.
So, the only action the SPDP performs on detecting a new remote participant is informing the SEDP of its existence and tracking the remote participant for changes in its parameters. Changes to the parameters have no consequences (yet). It is up to the user application to make use of the remote participant information stored in the SPDP's history cache.
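
The sequence-number comparison could be sketched as follows; the
function name and the map-based bookkeeping are hypothetical:

\begin{lstlisting}
%% Sketch: classify an incoming announcement from a remote participant
%% by comparing its sequence number with the last one noted for its guid.
check_remote(Guid, SeqNum, Known) ->
    case maps:find(Guid, Known) of
        error ->
            {new_participant, maps:put(Guid, SeqNum, Known)};
        {ok, Last} when Last < SeqNum ->
            {updated, maps:put(Guid, SeqNum, Known)};
        {ok, _} ->
            {unchanged, Known}
    end.
\end{lstlisting}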

\subsubsection{Notes}

The built-in endpoints used by the SPDP both have their topic kind set to \lstinline{with_key}.
The specs do not define how to handle the `NOT\_ALIVE\_DISPOSED' cache change that is used in this case. RTPS itself is not able to handle a NOT\_ALIVE\_DISPOSED or NOT\_ALIVE\_UNREGISTERED type of cache change.

\subsection{The Simple Endpoint Discovery Protocol (SEDP)}

The SEDP is a separate process running directly under the participant
supervisor which will inform interested parties which writers and
readers the local participant has started and the relevant parameters
for establishing a connection between the local and remote endpoints.
For each remote participant detected by the SPDP, the SEDP will start
a dedicated proxy for the remote participant to detect and monitor
which endpoints the remote side is using. The SEDP uses a single history
cache to record the local and remote endpoints.

On initialization of the SEDP, a series of built-in endpoints are
started. Which of the predefined built-in endpoints are started does
depend on the precise nature of the local participant as described in
`8.5.4.3 Built-in Endpoints required by the Simple Endpoint Discovery
Protocol (p132)', but will mostly be at least the following four
endpoints with the reliability level set to reliable:

\begin{enumerate}

\item
  \lstinline{SEDPbuiltinPublicationsWriter}: this writer is used to
  announce which local writers the local participant has started. For
  every detected remote participant, a reader-proxy is added to this
  writer to maintain reliable communication;

\item
  \lstinline{SEDPbuiltinPublicationsReader}: this reader is used to
  detect which remote writers the remote participants are using. For
  every detected remote participant, a writer-proxy is added to this
  reader;

\item
  \lstinline{SEDPbuiltinSubscriptionsWriter}: this writer announces
  which local readers the local participant has started. For every
  remote participant, a reader proxy is added;

\item
  \lstinline{SEDPbuiltinSubscriptionsReader}: this reader detects
  which remote readers are in use by remote participants. For every
  remote participant, a writer proxy is added;

\item
  \lstinline{SEDPbuiltinTopicsWriter}: optional and not used.

\item
  \lstinline{SEDPbuiltinTopicsReader}: optional and not used.

\end{enumerate}

The publications built-in endpoints are concerned with the local and
remote writers while the subscription built-in endpoints are concerned
with the local and remote readers. The endpoints not only make available
information on which writers as sources are available, but also which
readers as destinations are in use. All this is maintained as proxies
per remote participant to make communication reliable. Each SEDP is thus
able to know, and make known, which writers are available on remote
participants and have local readers attached, and which remote readers
are making use of the local writers.

NB: The built-in endpoints used by the SEDP all have their topic kind
set to \lstinline{with_key}.

As with the SPDP, it is up to the user application to do something with
remote writers and readers. The SEDP just keeps track what is available
and in use.

NB: THE SEDP WILL ONLY SEND DATA MESSAGES WHEN THERE IS AT LEAST ONE
REMOTE PARTICIPANT DETECTED. WITHOUT A REMOTE PARTICIPANT, THE SEDP WILL
BE COMPLETELY SILENT.

\subsection{SPDPdiscoveredParticipantData}

On a regular basis, the participants are queried to obtain the
\lstinline{SPDPdiscoveredParticipantData} (section 8.5.3.2) or participant data
for short.

\subsection{SPDPbuiltinParticipantWriter}

This writer is a very regular \lstinline{rtps_writer} instance. By
giving it the entity id \lstinline{BuiltinParticipantMessageWriter},
entity type (\lstinline{built_in}) and entity kind
(\lstinline{writer_with_key}) are implicitly defined, i.e. it is
unnecessary to set entity type and kind explicitly.

If no multicast or unicast locators are passed in on initialization, the
writer will use the default multicast locator as defined by the specs,
based on the domain id.

\subsection{SPDPbuiltinParticipantReader}

A default multicast reader is started by the SPDP, so starting a reader
per participant is not required. Only if a
\lstinline{SPDPbuiltinParticipantReader} uses a different multicast or
unicast locator list, extra locators must be added to the default
reader.

\begin{itemize}

\item
  Currently we use a resend timer in the module
  \lstinline{rtps_spdp} to trigger the announcements. One could also
  add such a timer to the stateless best-effort writer(s) and delegate
  the task of resending to them, which would also allow for locator /
  transport specific timings. However, this increases the number of
  timers in the system and the added flexibility would probably be
  hardly used.

\item
  If we had a writer that could publish data on behalf of other
  writers, we could use a single writer in this case. The RTPS protocol
  uses the InfoSource submessage to switch from one source GUID prefix
  to another, so that may be used. Also, it is a known option to have a
  writer (re)play cache changes with another GUID prefix. The current
  implementation of the writer closely links the writer's GUID prefix
  with the history cache changes it can fetch.

\end{itemize}

\section{QoS}

UNTESTED!

Quality of Service (QoS) is used by the RTPS protocol during discovery
and configuration to relay some parameters such as the reliability
level, locators and timing used. QoSs are key-value pairs which can be
encoded either as the payload of DATA messages or as part of a separate
`Inline QoS' field in a DATA message. In the scope of RTPS, a QoS
key-value pair is also referred to as a parameter.

Applications that make use of RTPS, notably DDS, use the same RTPS QoS
mechanism for their own set of types of QoS. To make use of it, the
application translates its own representation of a QoS into a
key-value pair and RTPS will handle it as it does with its own
parameters. For example, in DDS the QoS \lstinline{history_qos} is
defined with two parameters \lstinline{kind} and \lstinline{depth},
with kind being an enumerated type and depth an integer value. It is
up to DDS to translate this structure into a binary representation and
pass it on to RTPS as a parameter with a key \lstinline{history_qos}
and the binary representation as its value.

The set of QoSs can be communicated as the payload of a DATA message,
which is mostly done during some configuration phase, either on the
RTPS level or in the application that makes use of RTPS. For this to
work, the `receiving' party must be able to keep track of the
`sending' party, which is implemented in a stateful reader. In case
the receiving party is not able to keep track of the QoSs of the other
party, the list of QoSs must be included in every message by using the
inline qos mechanism. According to the RTPS specs, while sending data
with the flag \lstinline{expects_inline_qos} set, the QoS values are
taken from the related DDS data writer and included in the data
message as inline qos. This indicates that the value of a QoS can
change from one moment to the next. For example, in 8.4.9.1.4:

\begin{lstlisting}
  IF (the_reader_proxy.expectsInlineQos) {
    DATA.inlineQos := the_rtps_writer.related_dds_writer.qos;
  }
\end{lstlisting}

The reader will collect the inline qos values if
\lstinline{expects_inline_qos} is set. The operation
\lstinline{get_qos/1} will fetch the list of QoS policies from the
reader in case the reader is operating in best-effort mode, or fetch
them from the writer proxies, in which case the QoS values are merged
(in undefined order).

There are a few types of QoS needed for the RTPS implementation, but
QoS as a whole is more of a DDS concern. The RTPS specs mention QoSs
all the time but do not further define what they are and how they are
used. The module \lstinline{rtps_qos} implements the QoS internal
representation and the en- and decoding of the various types of
QoS. The \lstinline{rtps_qos} module is likely to be replaced by
something else in the dds application.


\chapter{The modules}

\section{rtps\_app}

Type:
\href{http://erlang.org/doc/apps/kernel/application.html}{application
  behavior}

This is a very minimalistic application behavior which will start the
main application supervisor. It exports the standard
\lstinline{start/2} and \lstinline{stop/1} API calls. See the
\href{http://erlang.org/doc/design_principles/applications.html}{application}
documentation for details.

\section{rtps\_sup}

Type: one-for-all
\href{http://erlang.org/doc/design_principles/sup_princ.html}{supervisor}

\section{rtps}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

Restart: permanent

This module has two roles to play as the overall controlling process in
the RTPS application and as serving as the external API.

\section{rtps\_reg}

Type:
\href{http://erlang.org/doc/design_principles/gen_server_concepts.html}{gen\_server}

This is a generic module used as a registry to keep track of a domain,
participant or endpoint and its pid. The registry is used by the main,
domain or participant controller to locate a process so it can for
example add a new endpoint to some participant. The registry
implements a key value store and some logic to manage the stored
tuples. The processes that must be registered implement the
\lstinline!{via, Module, Name}! pattern in their
\lstinline{start_link/3,4} calls to register their pid and
id\footnote{See the
  \href{http://erlang.org/doc/man/global.html}{global} module for an
  example implementation of the registry API calls.}. For example, the
next code will register the domain id with the pid using the
\lstinline{rtps_reg} module:

\begin{lstlisting}
start_link(Sup, Reg, Id, Opts) ->
    ...
    gen_statem:start_link({via, rtps_reg, {Reg, Id}}, ?MODULE, [...], []).
\end{lstlisting}

In this call the \lstinline{Reg} parameter holds the pid of the
domain's registry and the parameter \lstinline{Id} contains the
domain's id 0...n. Using \lstinline!{via, rtps_reg, {Reg, Id}}!, Erlang will
use the module \lstinline{rtps_reg} to register the name, which
happens to be the tuple \lstinline!{Reg, Id}!. The \lstinline{rtps_reg}
module implements the registration protocol prescribed by Erlang with
the API functions shown next:

\begin{lstlisting}
register_name({Reg, Id}, Pid) ->
    gen_server:call(Reg, {register, Id, Pid}).

unregister_name({Reg, Id}) ->
    gen_server:cast(Reg, {unregister, Id}).

whereis_name(Reg, Id) ->
    whereis_name({Reg, Id}).

whereis_name({Reg, Id}) ->
    gen_server:call(Reg, {whereis, Id}).

\end{lstlisting}

The trick here is that the name parameter actually contains both the
Id to be registered and the pid of the process which should be used as
the registry. This way, we can have an unnamed registry process for
every domain and participant to register the contained participants
and endpoints respectively. Using the standard \lstinline{via} method
instead of some tailor-made mechanism is massively more reliable,
robust and easier to do, and automatically takes care of the
appropriate calls when creating, deleting and restarting processes.

The \lstinline{Reg} parameter is determined by the controlling
processes directly after they are initialized by making a call to the
supervisor to retrieve the pid of the registry started by the
supervisor. The registry's pid can be considered a constant because in
case of a restart of a domain, participant etc, the one-for-all
supervisor which is in control will restart both the controller and
registry.

The key value store currently is implemented as a list of tuples, which
is fine for a small number of tuples, which probably is the case for
this application. The number of domains and participants tends to be
small and the number of endpoints won't be large either. Changing the
list to something different is trivial and would only be called for when
the number of elements grows beyond, say, 100 and the list is updated
frequently, which probably isn't the case anyhow.

Some notes:

\begin{itemize}
\item
  Using the supervisor itself to find the pids belonging to ids would be
  another possibility but is not supported directly by the supervisor
  and may also lead to deadlock situations easily. Keeping a list within
  the controller itself was considered but adds complexity to the
  controllers. Using \lstinline{via} just makes things very easy and works under
  all circumstances.
\end{itemize}

\section{rtps\_domain\_sup}

Type: one-for-all
\href{http://erlang.org/doc/design_principles/sup_princ.html}{supervisor}

\section{rtps\_domain}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

The \lstinline{rtps_domain} process is the process which will use the
\lstinline{start_link} call with the \lstinline{via} construct to
register itself with the domain registry: i.e. not the domain
supervisor is registered but the domain controller. This is obvious
once you realize that it is the controller that manages all that is
happening within a domain, not the supervisor.

Because participants are assigned a hidden unique positive integer based
id within the domain on a node automatically when started, the domain
controller keeps the last assigned participant id in its state and a
list with already assigned ids. The ids mentioned here are NOT the ids
used as part of the GUIDs that are used for addressing endpoints but are
used to calculate which port numbers are going to be used for
transports. Ids from deleted participants can be reused which is why a
list is kept of assigned ids.

\section{rtps\_participant\_sup}

Type: one-for-all
\href{http://erlang.org/doc/design_principles/sup_princ.html}{supervisor}

\section{rtps\_participant}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

It is the user application which determines the participant's
GUID-prefix just like the domain id to be used.

Some remarks:

\begin{itemize}
\item
  If the list with assigned participant ids equals the maximum number of
  participants allowed within a domain, we get an error. We check for
  this condition for clarity and to simplify the assignment of new
  participant ids later on. In most cases the number of participants is
  small, likely only one, and adding or removing participants occurs
  rarely, making a list an appropriate data structure;
\item
  Participant ids are assigned by the function \lstinline{participant_id}
  incrementally up to the maximum id allowed after which the function
  will try to assign ids starting from 0 again looking for unused ids.
  There is no good reason to do it this way or use some other mechanism;
\item
  The \lstinline{Reg} variable refers to the domain's registry which keeps track
  of the participants within the domain and which is used by the
  \lstinline{rtps_participant} controller process to register itself;
\item
  The \lstinline{Trans} variable refers to the domain's transport supervisor
  passed on to endpoints later on to get hold off a transport;
\item
  Options are passed on as property lists, with processes adding options
  as needed in front of the list, making such options supersede already
  defined instances of the option.
\end{itemize}
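
The assignment described above could be sketched as follows; only the
name \lstinline{participant_id} is taken from the text, the body is
illustrative:

\begin{lstlisting}
%% Sketch: hand out the next unused participant id, wrapping around to 0
%% when the maximum id is reached. Assigned is the list of ids in use.
participant_id(_Last, Assigned, Max) when length(Assigned) >= Max ->
    {error, no_free_id};               % all ids within the domain are taken
participant_id(Last, Assigned, Max) ->
    Next = (Last + 1) rem Max,
    case lists:member(Next, Assigned) of
        true  -> participant_id(Next, Assigned, Max);  % try the next id
        false -> {ok, Next, [Next | Assigned]}
    end.
\end{lstlisting}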

\section{Endpoints}

Each and every endpoint within a domain has a unique id, its GUID,
with endpoints belonging to the same participant sharing a prefix of the
id, the GUID-prefix. The GUID is user defined. Basically, a user
application will create a participant with a particular GUID-prefix and
add endpoints to the participant with an id which completes the GUID.
The endpoint ids are called \lstinline{entity ids} and must follow the
following schema:

\begin{enumerate}
\item
  the entity id contains a single-octet \lstinline{kind}
  field\footnote{See Table 9.1 - entityKind octet of an EntityId\_t of
    the RTPS specifications}, which determines whether the endpoint is
  a built-in or user-defined endpoint, whether it is a writer or reader
  and whether the cache changes use keys or not. These values are
  predefined but also leave room for user-defined values;
\item
  a three-octet long \lstinline{key} which is used to make the
  \lstinline{entity id} unique: i.e. every endpoint within a
  participant must be unique and the key field is used to
  differentiate between endpoints of the same kind.
\end{enumerate}
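
This layout maps directly onto an Erlang binary: a three-octet key
followed by the kind octet. A sketch (the helper function is
hypothetical; the example value is the predefined SPDP built-in
participant writer entity id from the specifications):

\begin{lstlisting}
%% Sketch: build an entity id binary from a 3-octet key and a kind octet.
entity_id(Key, Kind) when Key < 16#1000000, Kind < 16#100 ->
    <<Key:24, Kind:8>>.

%% Example: the predefined ENTITYID_SPDP_BUILTIN_PARTICIPANT_WRITER,
%% key {00,01,00} with kind 16#c2 (built-in writer with key):
%% entity_id(16#000100, 16#c2) yields <<0,1,0,16#c2>>.
\end{lstlisting}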

Endpoints are associated with a single history cache. In case the
endpoint is a writer, the history cache contains the cache changes that
are to be published by the endpoint. In case the endpoint is a reader,
the history cache contains all the received cache changes from one or
more different writers, with the GUID of the originating writer
associated with each cache change in the reader's history cache.

Endpoints, either writers or readers, can support different levels of
reliability and can be stateless or stateful. A reliable endpoint is
able to take action in case a problem is detected during communication
and can request a retransmission of a cache change. The stateless
endpoints have no knowledge of what other endpoints are involved while
the stateful endpoints keep track of the remote endpoints involved.
Endpoints with different levels of reliability and state can be mixed up
to a certain extent as described in the specifications\footnote{See
  Table 8.44 - Possible combinations of attributes for a matched RTPS
  Writer and RTPS Reader}.


\section{rtps\_writer}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

The module \lstinline{rtps_writer} implements the writer endpoint as a state
machine with the states \lstinline{stateless} and \lstinline{stateful}, but
without the possibility to switch state after being started. Instead of
implementing the stateless and stateful writer as separate modules, the
state is used to implement the different behaviors since much of the
functionality is similar for the two types of writers. The state also
defines the API calls that the writer will act upon.

A stateless writer uses one or more \lstinline{locators} to publish
cache changes. In this implementation, locators are for example
UDP/IPv4 multi- or unicast addresses. For each locator used by the
writer, a so-called `reader locator' is used.

A stateful writer keeps track of the readers which subscribe to the
cache changes published. For each reader, the writer uses a so called
\lstinline{reader proxy}. The reader proxy may use one or more locators to
communicate with the reader.

Because every cache change must be assigned a unique sequence number for
readers to keep track of the cache changes received, including
reordering and detecting missing cache changes, the writer is also
responsible for handing out these sequence numbers. For each cache
change a user application wants to write to the history cache, it must
first obtain a new sequence number from the writer before writing to the
cache.
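
The hand-out of sequence numbers could be sketched as follows; the API
name and state layout are illustrative:

\begin{lstlisting}
%% Sketch: a user application asks the writer for a fresh sequence
%% number before writing a cache change to the history cache.
new_sequence_number(Writer) ->
    gen_statem:call(Writer, new_sequence_number).

%% In the writer's state machine, last_seq holds the last number given out:
handle_event({call, From}, new_sequence_number, _State,
             #{last_seq := Last} = Data) ->
    Next = Last + 1,
    {keep_state, Data#{last_seq := Next}, [{reply, From, Next}]}.
\end{lstlisting}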

The specs say nothing about how a reader locator or reader proxy is
informed of the availability of unsent changes. This implementation
currently uses a operation add\_change implemented by the writer which
will add a change to the history cache and will call all locators or
proxies to inform them of the added cache change. One could consider
implementing an event manager which is informed by the history cache of
the fact changes are added and which may call event handlers for the
associated locators and proxies, but such a scheme seems a little to
complex for now.

Some more remarks:

\begin{itemize}
\item
  In case the history cache contains cache changes on initialization of
  the writer, the next sequence number given out is determined by the
  highest sequence number in the cache for the writer's GUID. The
  history cache can therefore be non-volatile;
\item
  Even though the above functionality is implemented, a new cache is
  currently created on initialization.
\end{itemize}

\section{rtps\_reader\_locator}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

The stateless writer uses the \lstinline{rtps_reader_locator} for each
transport locator. A reader locator can be operated in either a
best-effort or reliable mode.

In the best-effort mode, the reader locator will take a list of cache
changes supplied by the writer and just publish these changes. It is the
writer which tells the reader locator when it is time to (re)send the
data after which the best-effort reader locator will take all cache
changes from the history cache and push them out to the transport.

In the reliable mode, the reader locator will also listen on the locator
for requests from remote readers. A remote reader, it doesn't matter
which one, can broadcast a \lstinline{repair} request\footnote{Such a repair
  request is implemented by the ACKNACK RTPS submessage.} which contains
the serial numbers it would like to receive. On receiving these repair
requests, the reader locator will look for and resend the requested
cache changes. If the requested cache changes are no longer available in
the writer's history cache, the remote reader is informed about that
situation using an RTPS housekeeping message.

In contrast to the state diagrams from the RTPS specifications, cache
changes are pushed to the transport all together using one single call
to the transport and not one at a time depending on the availability of
the transport. For the transport to be able to group cache changes in
RTPS messages for optimization, it needs a set of cache changes to
select from; otherwise it would probably process them directly one
after another. Also, sending a message to the transport for every single
cache change does involve considerable resources, even when sending
Erlang messages is considered cheap. Last but not least, the reader
locator will only inform the transport about which cache changes should
be published and will leave it to the transport to actually fetch the
cache changes from the history cache, serialize them and maybe even split
large cache changes into parts if the transport can't handle their size.
The transport, and not the reader locator, knows about the peculiarities
of the network protocol used and how to optimize communication.

The reader locator is implemented as a single module
using the gen\_statem behavior for both reliable and best-effort
modes, which are part of the module's state, because they share much
common functionality. This is much like the stateless and stateful
states used in the writer implementation. However, the reliable mode
reader locator also requires two states, the \lstinline{normal} and a
\lstinline{repair} state. Actually, the repair state uses a
\lstinline{waiting} and a \lstinline{must_repair} state while the normal
state is not explicitly used. To implement all this in a single state
machine, the \lstinline{rtps_reader_locator} is implemented as a
gen\_statem behavior using \lstinline{handle_event} functions and a
complex \lstinline{State}. The state uses the following definition:

\begin{lstlisting}
  -record(state, {reliability_level :: best_effort | reliable,
    % state1 :: idle | announcing, % | pushing
    state2 :: undefined | waiting | must_repair}). % repairing
\end{lstlisting}

The state is defined as a record with room for the reliability level and
a state parameter used for handling repair requests.

In best-effort mode, the reader locator is either idle or pushing
according to the specifications, but since in this implementation all
available changes are pushed using a single call to the transport, the
pushing state is not needed. On initialization of the best-effort
reader locator, the transport is initialized using
\lstinline{undefined} as the remote guid instead of the value
\lstinline{unknown} as might be expected and used in several other
situations. If \lstinline{unknown} were used instead of
\lstinline{undefined} as the remote guid, the RTPS receiver would
consider this endpoint as a candidate for incoming messages, which is
not correct since the best-effort reader locator will never
process any incoming messages.

When operating in reliable mode, the \lstinline{normal} state consists
of sub-states \lstinline{announcing} and \lstinline{pushing} but as
before the pushing sub-state is not used and therefore the whole normal
state can be ignored. The difference between idle in case of
best-effort and announcing in reliable mode will be explained later.

For the \lstinline{repair} state of the reliable reader locator, a
sub-state is required indicating either that there are no repair
requests to be handled (\lstinline{waiting}), or that a repair request
has been received but the reader locator should wait a bit, in case
another repair request arrives, before resending
(\lstinline{must_repair}). Because sending is done, as in the case of
pushing, using a single call to the transport, the sending sub-state is
not used.

Using the complex state in the \lstinline{handle_event} functions allows the
combination of the sub-states to be handled in a rather nice and clean
manner.
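
A sketch of what such \lstinline{handle_event} clauses might look like
for the repair sub-states is given below. The helpers
\lstinline{merge_requests/2} and \lstinline{send_repairs/1} and the
200~ms delay are illustrative assumptions, not the actual code.

\begin{lstlisting}
%% Sketch: on a repair request, remember the requested sequence
%% numbers and wait a little while for further repair requests
%% before resending; on the timeout, resend everything requested
%% using one call to the transport.
handle_event(cast, {acknack, Requested},
             #state{reliability_level = reliable,
                    state2 = waiting} = S, Data) ->
    {next_state, S#state{state2 = must_repair},
     merge_requests(Requested, Data),
     [{state_timeout, 200, repair}]};   % nack response delay (ms)
handle_event(state_timeout, repair,
             #state{state2 = must_repair} = S, Data) ->
    {next_state, S#state{state2 = waiting}, send_repairs(Data)}.
\end{lstlisting}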

Other remarks:

\begin{itemize}
\item
  The difference between \lstinline{idle} and \lstinline{announcing}
  is that in reliable mode a reader locator can simply announce which
  cache changes are available within the writer's history cache and
  leave it to remote readers to ask for the cache changes they do
  actually need, using a repair request. In best-effort mode, the
  reader locator will send the cache changes when instructed by the
  writer and become idle again;
\item
  The \lstinline{heartbeat} timer runs locally in each reliable reader
  locator: i.e. reader locators can use different heartbeat periods
  if needed;
\item
  New data samples are sent immediately (or not, in reliable mode when
  push mode is \lstinline{false}). Requesting data samples adds the
  requests to a list, merging them with other requests, and the reader
  locator will try to fetch all requested data samples from the
  history cache and send them;
\item
  Calling the transport is done with the call
  \lstinline{rtps_sender:send/4} which is a part of the transport
  which is responsible for collecting things to be sent and combining
  them into RTPS messages before the actual sending on the network as
  will be explained in a separate section below;
\item
  The destination for cache changes sent by a reader locator always is
  READER\_GUID\_UNKNOWN, even when they are sent in response to a remote
  reader's repair request. By definition, a reader locator has no
  knowledge of the remote readers;
\item
  While processing the ACKNACK submessage, it is checked that the
  reader locator is within the reply\_to list from the receiver. If not,
  the ACKNACK is ignored.
\end{itemize}

\section{rtps\_reader\_proxy}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

The stateful writer uses the \lstinline{rtps_reader_proxy} for each remote
reader it talks to, with each reader proxy representing the knowledge the
writer has about that reader. The reader proxy can use either the
best-effort or reliable reliability level. In turn, the reader proxy
uses the transport for the actual publishing of cache changes.

In both best-effort and reliable mode, the reader proxy keeps track of
which cache changes were sent and which were not. Also, the reader proxy
can apply a filter on the cache changes on behalf of the remote reader
to determine which cache changes to send or not. Applying a filter
before sending can have a huge impact on network utilization and the
remote reader's performance.

The best-effort reader proxy will accept new cache changes for the
remote reader, apply the filter if any, send them off and mark the cache
change as processed, never to resend it. Because of its
best-effort nature, there is no way for the remote reader to indicate
that it missed a cache change and ask for a repair by the reader proxy.
Much like the best-effort reader locator, the best-effort reader proxy
will publish the cache changes and forget about it, but with the added
functionality of filtering.

In reliable mode, the reader proxy is able to process repair requests
from the remote reader. The reader proxy now will send the cache changes
to the remote reader, after filtering on behalf of the reader if
applicable, and mark these cache changes as sent but still
unacknowledged. It will wait for the remote reader to acknowledge the
receipt of the cache changes, marking them as acknowledged by
removing them from its internal state. As long as the remote reader
hasn't acknowledged the receipt of cache changes, it will periodically
remind the remote reader about those cache changes. The remote
reader should inform the reader proxy about the receipt of cache changes
and can request the proxy to resend the cache changes it is still
missing.

The reader proxy must keep track of individual cache changes for a
reader, which are referred to as \lstinline{changes for reader}
(\lstinline{cfr}s) and which, besides the cache change, also hold a
status. Because the reader proxy in best-effort mode will not bother
trying to resend cache changes, the cfr's status is only used by
the reliable reader proxy. The changes for reader have the
following representation:

\begin{lstlisting}
-record(cfr, {sequence_number :: sequence_number(),
              % Status new and acknowledged are not used.
              status :: unsent | requested | unacknowledged | underway,
              is_relevant :: boolean(), timestamp :: integer()}).
\end{lstlisting}

The statuses \lstinline{new} and \lstinline{acknowledged} are
implicit in the existence or non-existence of the cfr,
respectively. Cache changes with status \lstinline{unsent} are
published in the normal process, receive the status
\lstinline{underway} and are timestamped. When the remote reader acknowledges
the cache change, the cache change becomes acknowledged and is removed
from the list of cfrs. If a cache change with status
\lstinline{underway} is not acknowledged within a certain period, its status
will change to \lstinline{unacknowledged}, allowing it to be
considered for repair processing. When a remote reader requests the
resending of the cache change, its status is changed accordingly,
which selects the cache change for retransmission, after which it is
changed to the status underway again with a new timestamp set.
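
The underway-to-unacknowledged transition could be sketched as follows
(an illustrative helper over the \lstinline{cfr} record, not the actual
code):

\begin{lstlisting}
%% Sketch: move underway cfrs back to unacknowledged when their
%% timestamp is older than Now minus the preset delay, so they become
%% candidates for repair processing again.
age_cfrs(Cfrs, Now, Delay) ->
    [case Cfr of
         #cfr{status = underway, timestamp = T} when T < Now - Delay ->
             Cfr#cfr{status = unacknowledged};
         _ ->
             Cfr
     end || Cfr <- Cfrs].
\end{lstlisting}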

The \lstinline{is_relevant} field indicates whether a cache change is
relevant for the remote reader; it is the outcome of the filter
process applied when adding new cache changes to the list of
cfrs. That is, the filter is applied when a new cache change is made
available to the reader proxy and is not executed again when sending
or repairing cfrs.

One could argue that the filter should be applied repeatedly if the
filter uses some time criterion: i.e. only send cache changes which are
to be considered current. If no such criterion is used, reapplying the
filter would produce the same outcome every time and waste
resources. This may be an optimization issue to look into later.
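
Applying the filter once, when the cfr is created, could look like this
(the \lstinline{Filter} fun and the helper name are illustrative
assumptions):

\begin{lstlisting}
%% Sketch: the filter runs exactly once, when the cfr is created for
%% a newly added cache change; Filter is a hypothetical fun returning
%% a boolean.
new_cfr(#cache_change{sequence_number = Sn} = Change, Filter) ->
    #cfr{sequence_number = Sn,
         status = unsent,
         is_relevant = Filter(Change),
         timestamp = erlang:monotonic_time(millisecond)}.
\end{lstlisting}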

The remote reader is informed about all cache changes which are marked
as not relevant, using an RTPS housekeeping message in a highly
compacted format, and it should respond to the proxy with an
acknowledgment in the same way as it does for regular cache changes.

The cfrs are kept in a list structure because it is unlikely their
number will grow very large, which may be a wrong assumption.

As with the \lstinline{rtps_reader_locator}, the 
\lstinline{rtps_reader_proxy} is implemented using handle\_event functions and
a complex state.

\subsection{Locators and reply locator}

A reader proxy on initialization takes as arguments a list of unicast
and multicast locators according to sections 8.4.9.1.1 and 8.4.9.2.1.
From the fact that two lists are used, we conclude that the proxy should
use all the locators from these lists as transports, i.e. a proxy has
several routes to reach the remote reader. Sending cache change
messages, heartbeats etc. should therefore be done using all transports.

Received messages from the \lstinline{rtps_receiver} hold a list of
reply locators. The reply locators normally contain the address,
port and transport type of the transport that received the message but
can be different if one of the submessages is an info-reply
submessage. The reply locator from the info-reply submessage can be
any locator without any restrictions, which introduces some issues: if
the reply locator is not already open and associated with the reader
proxy, the reader proxy should open the reply locator, which can a)
lead to many uncontrollable transports being opened (and probably
closed) and b) introduce the risk that transports are used that
should not be used at all.

On receiving an ACKNACK, the \lstinline{reply_to} list is stored in
the reader proxy's state for later use when the
\lstinline{nack_response_delay} expires. While handling the
\lstinline{nack_response_delay} timeout, only the locators from the
\lstinline{reply_locs} are used if they are contained in the locator
list of the proxy: i.e. if the reply-to locator is not one of the
locators used by the proxy, nothing is sent using that reply locator.
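
This restriction on reply locators amounts to a simple intersection,
which could be sketched as (illustrative helper name):

\begin{lstlisting}
%% Sketch: only reply locators that are already part of the proxy's
%% own locator list are used; anything else is silently dropped.
usable_reply_locs(ReplyLocs, ProxyLocs) ->
    [Loc || Loc <- ReplyLocs, lists:member(Loc, ProxyLocs)].
\end{lstlisting}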

\subsection{Some more remarks}

\begin{itemize}
\item
  The list of cfrs is reprocessed every time an \lstinline{acknack}
  message is received by the reader proxy. Only such a message will
  make relevant status changes. The \lstinline{acknack} message first
  of all informs the reader proxy which cache changes are acknowledged
  and therefore can be dropped from the list of cfrs. The message may
  also contain a list of cache changes that should be resent;
\item
  While processing the \lstinline{acknack} message, cache changes
  which are marked as being \lstinline{underway} are moved to the
  \lstinline{unacknowledged} state if their timestamp is older than
  the current time minus a preset delay;
\item
  While announcing which cache changes are available for the remote
  reader, the reader proxy uses its knowledge about which cache
  changes the remote reader has received. If there are no more cache
  changes the remote reader is missing, the reader proxy will turn
  silent until there is a new cache change available from the
  writer. The heartbeat does not use the history cache's minimum and
  maximum sequence numbers but uses its own internal list of cfrs for
  the \lstinline{First_sn} and \lstinline{Last_sn}.
\end{itemize}
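
Deriving \lstinline{First_sn} and \lstinline{Last_sn} from the proxy's
own cfr list could be sketched as follows (illustrative helper, not the
actual code):

\begin{lstlisting}
%% Sketch: heartbeat bounds come from the proxy's own cfr list, not
%% from the history cache; with no cfrs left the proxy turns silent.
heartbeat_bounds([]) ->
    silent;
heartbeat_bounds(Cfrs) ->
    Sns = [Sn || #cfr{sequence_number = Sn} <- Cfrs],
    {lists:min(Sns), lists:max(Sns)}.   % {First_sn, Last_sn}
\end{lstlisting}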

The filter is called for every added cache change, but a full filter
implementation would require a domain-specific language.

\section{rtps\_reader}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

The reader is also implemented as a state machine with the states
\lstinline{ stateless} and \lstinline{ stateful}. The stateless reader
receives cache changes by registering with a transport using a
particular locator and simply stuffing these cache changes in the
associated history cache. A stateless reader can only operate as a
\lstinline{best-effort} reader because it doesn't keep track of the
writers, note the plural used here, which publish on the locator.

The stateful reader is a bit more involved since a proxy is used for
each remote writer. For each remote writer, the stateful reader will
start a new so-called 'writer proxy' process. The writer proxies are
directly started by the reader and linked with it, i.e. there is no
supervisor used here.

Some more remarks:

\begin{itemize}
\item
  The stateless reader attaches to the transport locators as defined in
  the list of uni- and multicast locators passed in during
  initialization. There are no API calls available to make changes to
  the locators used, which is according to the specifications but a
  little weird, because there seems to be no good reason not to allow
  changes to the locators used by the reader;
\item
  The stateless reader handles the receipt of cache changes and storage
  in the history cache directly while for the stateful reader this is
  handled in a separate process for each writer proxy;
\item
  Cache changes are stored in the history cache together with the
  writer's GUID. A reader may receive cache changes from more than one
  writer.
\end{itemize}

\section{rtps\_writer\_proxy}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

Much like the previously described reader proxy, the writer proxy can
run in best-effort and reliable mode. Because the writer proxy is
associated with a particular remote writer, identified with the
remote's writer GUID, this set-up of a reader plus a writer proxy is
useful for following a particular writer. The module is implemented as
a gen\_statem behavior using \lstinline{handle_event} functions and a
complex state variable. The state has the following structure:

\begin{lstlisting}
-record(state, {reliability_level :: reliable | best_effort,
                state :: waiting | may_send_ack | must_send_ack}).
\end{lstlisting}

In best-effort mode, the writer proxy keeps track of the last cache
change sequence number received and will only accept cache changes with
higher sequence numbers. If the associated history cache on start-up
already contains cache changes for the writer's GUID, the next cache
change that will be accepted is taken from the cache.

In reliable mode, the writer proxy keeps track of which cache changes
the remote writer has available in its history cache, which cache
changes have been received and which are still missing. In this mode,
the writer proxy will actively request the resending of cache changes
by the writer when it finds it is missing cache changes. The cache
changes the writer proxy is concerned with are referred to as
\lstinline{changes from writer} (\lstinline{cfw}s). For a reliable
writer proxy to be able to operate, the remote writer must also
operate in reliable mode because the proxy needs some information and
functionality only reliable writers support. The cache changes
involved have the following statuses:

\begin{enumerate}
\item
  Lost: if the cache change doesn't exist in the writer's history cache
  anymore, the cache change is considered to be lost. All cache changes
  with a sequence number less or equal to the last lost cache change are
  considered lost and will not be considered anymore;
\item
  Unknown: cache changes which are not yet available in the writer's
  history cache are called \lstinline{unknown}. All cache changes with
  sequence number higher than the last \lstinline{known} cache change
  have status unknown;
\item
  \lstinline{Known} can be either status \lstinline{missing},
  \lstinline{received} or \lstinline{requested};
\item
  All cache changes between the last lost and the first unknown cache
  changes are either missing, received, or requested;
\item
  The continuous set of received cache changes directly starting after
  the last lost cache change is the \lstinline{available} set of cache changes
  that can be used by the user application. This set of available cache
  changes is what the user application sees as being available and which
  it gets access to;
\item
  \lstinline{Missing} cache changes become the status
  \lstinline{requested} when the writer proxy informs the remote
  writer about the missing cache changes.
\end{enumerate}

Keeping track of all individual cache changes would require a lot of
resources and not be very efficient. We therefore use some
sort of sliding window which reflects the statuses of the changes for
writer.
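
Such a sliding window might be represented as follows. This is an
illustrative sketch, not the actual record used in the implementation.

\begin{lstlisting}
%% Sketch: everything at or below last_lost is lost, everything above
%% first_unknown is unknown; only the statuses of the changes in
%% between are tracked individually.
-record(cfw_window,
        {last_lost     :: sequence_number(),
         first_unknown :: sequence_number(),
         %% [{sequence_number(), missing | received | requested}]
         statuses = [] :: [{sequence_number(), atom()}]}).
\end{lstlisting}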

Some more remarks:

\begin{itemize}
\item
  Only relevant changes are stored. It is unknown how user applications
  are supposed to react to this condition. Non-relevance is determined
  by the writer as a result of filtering and communicated to the writer
  proxy using GAP messages;
\end{itemize}

\section{rtps\_history\_cache}

Type:
\href{http://erlang.org/doc/design_principles/gen_server_concepts.html}{gen\_server}

A user application which wants to publish data writes its data as
cache changes in the history cache of the writer. The RTPS protocol is
used to get a copy of the writer's history cache at the reader's
history cache(s) with minimum delay and as reliable as needed.

A history cache is a storage area which contains cache changes
belonging to a writer and each cache change having a unique sequence
number. A history cache may contain cache changes for more than one
writer: mostly a single writer on the publishing side and maybe
more than one remote writer on the receiving side. The key for every
cache change in a history cache is the writer's GUID plus the cache
change sequence number. As described earlier, sequence numbers are
generated by the \lstinline{rtps_writer} on request of a user
application (because there is only one writer instance per writer
GUID).

For each writer GUID, the history cache maintains an \lstinline{ets}
ordered set table with the sequence number as the key. Tables are
created and deleted as needed: storing a cache change with a
previously unknown writer GUID will result in the creation of a new
table, while removing the last cache change from a table will lead to
removing that table.

Because sequence numbers are ordered, an ordered set is used. Also, all
cache changes must be processed in sequence number order throughout
the whole RTPS protocol, making the ordered set the natural choice for
the ets table type.

Adding a cache change with a sequence number already available in the
table will leave the existing cache change unchanged and return an
already present error. For reasons of efficiency and in contrast with
the specifications, the history cache supports adding and deleting a
list of cache changes within a single API call. For each writer GUID,
the first and last sequence number of the stored cache changes can be
requested.
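
With an ordered set table per writer GUID, the 'already present'
behaviour maps directly onto \lstinline{ets:insert_new/2}, which could
be sketched as (the function name is an illustrative assumption):

\begin{lstlisting}
%% Sketch: one ordered_set ets table per writer GUID, keyed on the
%% sequence number; an already stored change is left unchanged.
store_change(Table, {SequenceNumber, _Data} = Change) ->
    case ets:insert_new(Table, Change) of
        true  -> ok;
        false -> {error, already_present}
    end.
\end{lstlisting}

The first and last sequence numbers per writer then follow directly
from \lstinline{ets:first/1} and \lstinline{ets:last/1} on the ordered
set.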

More remarks:

\begin{itemize}
\item
  Currently, history caches are created by the writer or reader
  associated with it. One could just as well make the creation and
  deletion of the history caches independent from the writers and
  readers and put them under the control of a history cache supervisor;
\item
  Ets tables of course are the simplest storage type that can be used.
  One could also use \lstinline{dets} tables in case the cache should be
  non-volatile, use \lstinline{mnesia} for more advanced setups or use some
  external key-value storage type. Do remember, this is a middleware
  implementation and not some advanced data storage solution, which is
  more the task of the user application making use of RTPS.
\end{itemize}

\section{Transport}

The next modules \lstinline{rtps_transport}, \lstinline{rtps_receiver}
and \lstinline{rtps_sender} make up a single transport. A transport is
associated with a \lstinline{locator}, which can be a UDP/IPv4/IPv6
address and port number but could also be another type of (network)
socket, Unix pipe or some shared file. This implementation currently
only supports UDP over IPv4 and IPv6. TCP, pipes and most other
mechanisms are already reliable forms of transport and there is no
need to use RTPS for these situations.

A transport can be shared by several endpoints within a domain;
transports are therefore grouped under the control of a supervisor
within a domain.

There are two related modules: \lstinline{rtps_submsg}, which is used for
manipulating the data structures related to the internal representation
and manipulation of RTPS messages, and \lstinline{rtps_psm}, for encoding and
decoding the internal representations into and from the external RTPS
message wire format.

\subsection{rtps\_transport\_sup}

Type: one-for-one
\href{http://erlang.org/doc/design_principles/sup_princ.html}{supervisor}

For each domain, a one-for-one supervisor is started for the transports.
A simple-one-for-one supervisor could have been used, but since
transports of different types may be supported in some future version of
the implementation, the one-for-one strategy is used.

This module has an API function to add a new transport, which is called
from an endpoint which needs a transport. Adding a transport results
in starting a new \lstinline{rtps_transport} process, but if a child
is started for an already used locator, the supervisor will detect
this and use the already running process. On starting a transport or
picking up an already running transport, a transport \lstinline{open}
call is made to the transport to attach the endpoint to the transport,
which is described in more detail later on.

Transports are linked with the endpoints. A failing transport will take
the endpoint with it and vice versa. The transport supervisor uses the
temporary restart strategy for the transports because the sender and
receiver associated with a transport must be associated with endpoints
and the supervisor is not able to restart the transport in a known state
in case of failure.

\subsection{rtps\_transport}

Type:
\href{http://erlang.org/doc/design_principles/gen_server_concepts.html}{gen\_server}

Every transport creates a UDP socket. Currently this is a bit
simplified and all UDP sockets are created as unicast or multicast UDP
sockets depending on the IP address used.

In case another RTPS instance on the same network host should be able
to communicate with this instance or for testing purposes, the
\lstinline{multicast_loop} option should be set to \lstinline{true},
which also makes debugging easier but will take considerably more
resources.

Next, a sender process is started and linked with this transport, and
the same is done for a receiver process. See below. The sender process
will take outgoing sub-messages from endpoints and combine them into
optimized outgoing messages, and the \lstinline{rtps_transport}
process will do the actual sending. The receiver will take an RTPS
message received by the \lstinline{rtps_transport} process, take the
message apart into its sub-messages and distribute these sub-messages
to the addressed endpoints.

Using separate processes for the actual transport, sender and receiver
has the advantage that the implementation of the modules is kept
simple and that these processes can run in parallel.

On receipt of an RTPS message by the transport, first the header of the
message is inspected to check if it is an RTPS message and if it has
the proper version. After accepting the message, it is
transformed\footnote{One could consider moving this step to the
  \lstinline{rtps_receiver} process to free the
  \lstinline{rtps_transport} for sending as quickly as possible.} into
its internal representation using the \lstinline{rtps_psm} module and
sent to the receiver process, which will process the message after it
has finished the previous one. This fits with the specification's
precondition that RTPS messages must be processed in the order of
arrival.

At the time the sender is created by the transport, the transport
passes its own pid as a parameter to the sender process, which will
use this pid for the outgoing RTPS messages. Every time the transport
gets ready to take a new RTPS message from the sender, it will inform
the sender by calling the rtps\_sender's \lstinline{can_send}
function. The sender therefore gets 'back-pressure' from the
transport. However, with the operating system's buffers for UDP and
depending on the implementation of UDP in the OS, it is hard to tell
under which conditions back-pressure kicks in. The buffers may be
large enough to accept all outgoing messages and because UDP is an
unreliable transport, the OS may even decide to drop outgoing messages
without informing the producer. This last item is an assumption and
may not be true!

Every time an endpoint attaches to a transport, using the
\lstinline{open} function call, the endpoint is also added to the list
of associated endpoints. This list of endpoints is dynamic and needed
by the receiver process to know which endpoints may be the target for
a sub-message. The most current list of endpoints is passed on to the
receiver on arrival of a new RTPS message. If the list of endpoints
becomes empty because all endpoints it is associated with have
terminated, the transport will also terminate. The next endpoint that
needs the transport will restart it via the supervisor.

\subsection{rtps\_receiver}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

This module implements the \lstinline{receiver} as defined in the RTPS
specifications. It uses the gen\_statem behavior with two state
functions: \lstinline{idle} and \lstinline{receiving}.

In the idle state, an RTPS message is accepted in its internal
record-based form, which is a series of sub-messages plus some extra
information needed by the receiver, at which point the receiver is
initialized to a defined state. The next state will now be
\lstinline{receiving} plus an internal event with the sub-messages to
trigger the receiving process, i.e. the \lstinline{idle} state
function uses the following state function result:

\begin{lstlisting}
  {next_state, receiving, Data,
   [{reply, From, ok}, {next_event, internal, Submsgs}]}.
\end{lstlisting}

with the variable \lstinline{Submsgs} containing the sub-messages
included within the RTPS message.

In the receiving state, the state machine will only process internal
events taking one sub-message after the other until the list of
sub-messages is exhausted after which the next state will be
\lstinline{idle} again. The \lstinline{receiving} state function has
clauses for each type of sub-message and uses the internal state for
processing each sub-message according to the specifications. For
example, the sub-message \lstinline{InfoDestination} may change which
endpoint to use as a destination for subsequent sub-messages. The
internal state is used to take information from housekeeping
sub-messages and add that information to subsequent data carrying
sub-messages.
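
Two \lstinline{receiving} clauses could be sketched as follows (the
record names \lstinline{info_destination} and \lstinline{data} are
illustrative assumptions):

\begin{lstlisting}
%% Sketch: one clause per sub-message type; InfoDestination only
%% updates the receiver state for the sub-messages that follow.
receiving(internal, [#info_destination{guid_prefix = P} | Rest], Data) ->
    {keep_state, Data#data{dest_guid_prefix = P},
     [{next_event, internal, Rest}]};
receiving(internal, [], Data) ->
    %% all sub-messages processed: back to idle
    {next_state, idle, Data}.
\end{lstlisting}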

Some more remarks:

\begin{itemize}
\item
  The sub-messages are sent to the endpoints using the Erlang
  \lstinline{!} send construct. The \lstinline{!} is used because
  endpoints are implemented using different modules and can even be
  different types of behaviors, making it impossible to use an API
  function call.
\end{itemize}

\subsection{rtps\_sender}

Type:
\href{http://erlang.org/doc/design_principles/statem.html}{gen\_statem}

The \lstinline{rtps_sender} is the logical equivalent of the receiver
at the sending side: the sender takes the list of sub-messages from
the different endpoints, adds housekeeping information based on the
type and order of the sub-messages and assembles the RTPS messages
according to the type of transport used. While assembling the
sub-messages into the final messages, taking into account the
properties of the transport, the \lstinline{rtps_sender} can try to
optimize the order of the sub-messages. For example, if the MTU of the
transport restricts the length of the UDP message to 1500 bytes, the
sender may try to arrange sub-messages to use the full MTU size or,
sub-messages for a particular destination may be taken together as
much as possible to reduce the number of housekeeping sub-messages to
be inserted and ease the processing burden on the receiving network
node.
	
The sender is implemented as a state machine using state
functions and state enter functionality.

On initialization, the sender is \lstinline{idle} and sits
waiting for a data sub-message or list of sub-messages from an
endpoint, after which it will take \lstinline{collecting} as
its next state.

On entering the \lstinline{collecting} state, the internal state of the state
machine is initialized to a known state which is very much the
equivalent of the receiver's initial state. The list of data
sub-messages is now processed, adding to the internal state such
information as the destination and source which will be added to the
RTPS message later as housekeeping sub-messages. Currently, messages are
assembled taking the data sub-messages from the endpoints in the order
as they arrive at the sender and stuffing them into an RTPS message
until either there are no more sub-messages to add or a size limit
would be exceeded. This is the simplest way to assemble messages.

While processing the list of data sub-messages, the sender determines
for each sub-message whether adding it would cause the maximum message
size to be exceeded. If so, the current message is finalized first and
queued for delivery to the transport. On finalizing an RTPS message,
the state machine `re-enters' the \lstinline{collecting} state by using
the \lstinline{repeat_state} return construct to re-initialize the
internal state.

In both the idle and collecting states, data sub-messages from
endpoints are added to the list of sub-messages being processed. Before
being added to that list, the sub-messages are first encoded with a
call to the \lstinline{rtps_psm} module, turning the internal
representation into the external wire representation. The reason for
doing this at this point is that the exact size of the external
sub-message is needed when assembling the final RTPS message.

The sender queues the RTPS messages, actually their internal
representation, for delivery to the transport. The transport informs
the sender that it is free to accept the next message by calling the
sender's \lstinline{can_send} API function, after which the sender
takes the next message from the list of messages, and so on. When the
list of messages becomes empty, the sender returns to the idle state. A
call to the \lstinline{can_send} function while idle postpones that
event for processing until the state of the sender becomes collecting
again.
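The idle/collecting behaviour described above can be sketched as a
\lstinline{gen_statem} along the following lines. This is a minimal
sketch, not the actual sender: the module, function and field names are
illustrative, the size limit is arbitrary, and
\lstinline{transport_send/1} stands in for a hypothetical transport
call.

\begin{lstlisting}
-module(sender_sketch).
-behaviour(gen_statem).

-export([start_link/0, submit/2, can_send/1]).
-export([init/1, callback_mode/0, idle/3, collecting/3]).

-define(MAX_SIZE, 1400).  %% illustrative transport size limit

start_link() -> gen_statem:start_link(?MODULE, [], []).

%% An endpoint hands a list of already encoded sub-messages to the sender.
submit(Pid, SubMsgs) -> gen_statem:cast(Pid, {submit, SubMsgs}).

%% The transport signals that it can accept the next message.
can_send(Pid) -> gen_statem:cast(Pid, can_send).

callback_mode() -> [state_functions, state_enter].

init([]) ->
    {ok, idle, #{pending => [], current => [], size => 0, queued => []}}.

idle(enter, _Old, _Data) ->
    keep_state_and_data;
idle(cast, {submit, SubMsgs}, Data = #{pending := P}) ->
    {next_state, collecting, Data#{pending := P ++ SubMsgs}};
idle(cast, can_send, _Data) ->
    %% Nothing to send; handle this event once collecting again.
    {keep_state_and_data, [postpone]}.

collecting(enter, _Old, Data) ->
    %% Re-initialize the per-message assembly state (the equivalent of
    %% the receiver's initial state) and start assembling immediately.
    {keep_state, Data#{current := [], size := 0},
     [{state_timeout, 0, assemble}]};
collecting(state_timeout, assemble,
           Data = #{pending := [S | Rest], current := Cur,
                    size := Sz, queued := Q}) ->
    case Cur =/= [] andalso Sz + byte_size(S) > ?MAX_SIZE of
        true ->
            %% The sub-message does not fit: finalize the current
            %% message and 're-enter' collecting via repeat_state,
            %% which resets the assembly state.
            {repeat_state, Data#{queued := Q ++ [finalize(Cur)]}};
        false ->
            {keep_state, Data#{pending := Rest, current := Cur ++ [S],
                               size := Sz + byte_size(S)},
             [{state_timeout, 0, assemble}]}
    end;
collecting(state_timeout, assemble,
           Data = #{pending := [], current := Cur, queued := Q})
  when Cur =/= [] ->
    %% No more sub-messages to add: finalize the message being built.
    {repeat_state, Data#{queued := Q ++ [finalize(Cur)]}};
collecting(state_timeout, assemble, #{pending := [], current := []}) ->
    keep_state_and_data;
collecting(cast, {submit, SubMsgs}, Data = #{pending := P}) ->
    {keep_state, Data#{pending := P ++ SubMsgs},
     [{state_timeout, 0, assemble}]};
collecting(cast, can_send, Data = #{queued := [M | Rest]}) ->
    transport_send(M),
    case Rest of
        [] -> {next_state, idle, Data#{queued := []}};
        _  -> {keep_state, Data#{queued := Rest}}
    end;
collecting(cast, can_send, #{queued := []}) ->
    keep_state_and_data.

finalize(SubMsgs) ->
    %% The RTPS header and Info* housekeeping sub-messages would be
    %% prepended here.
    list_to_binary(SubMsgs).

transport_send(_Msg) -> ok.   %% hypothetical transport call
\end{lstlisting}

Note that the \lstinline{repeat_state} return re-runs the state enter
callback, which is what resets the assembly state between messages, and
that \lstinline{can_send} in the idle state is postponed exactly as
described above.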

\subsubsection{Injecting Info* submessages}

The sender inserts the following Info* submessages:

\begin{itemize}
	\item
	InfoSource modifies the logical source of the Submessages that
	follow it;
	\item
	InfoDestination modifies the GuidPrefix used to interpret the Reader
	entityIds appearing in the Submessages that follow it;
	\item
	InfoReply contains explicit information on where to send a reply to
	the Submessages that follow it within the same message.
\end{itemize}

While inserting these submessages, one has to take into account:

\begin{enumerate}
	\item
	Only insert such a submessage when, after insertion, there is still
	enough room left in the RTPS message for the submessage the Info*
	submessage was intended for; i.e. only insert an InfoReply when the
	ACKNACK can also be sent.
\end{enumerate}
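The room check in point 1 can be expressed as a small predicate. The
function name and the size accounting are illustrative, not the actual
implementation:

\begin{lstlisting}
%% Insert an Info* sub-message only if the sub-message it announces
%% still fits in the same RTPS message afterwards. InfoSub and Sub are
%% the encoded (binary) sub-messages; names are illustrative.
may_insert_info(InfoSub, Sub, CurrentSize, MaxSize) ->
    CurrentSize + byte_size(InfoSub) + byte_size(Sub) =< MaxSize.
\end{lstlisting}

Only when this predicate holds are both the InfoReply and the ACKNACK
added to the message being assembled; otherwise the current message is
finalized first and both go into the next one.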

\subsubsection{More remarks}

\begin{itemize}
	\item
	Finished messages to be delivered to the transport are queued using a
	list structure and not a queue, because it is still unclear how the
	queuing currently behaves and whether a true queue would help; it is
	a minor issue to add an Erlang queue here;
	\item
	More advanced optimization strategies should be explored;
	\item
	Must review the handling of internal messages while collecting and
	arrival of new data sub-messages from endpoints as noted before.
\end{itemize}

\section{rtps\_submsg}

This module is used to create the internal representation of
sub-messages such as data, gap, heartbeat and acknack sub-messages. The
main goal of the module is to fill in the records correctly, do some
formatting, etc.
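As an illustration of the kind of record filling done here, a heartbeat
constructor might look as follows. The record definition and field
names are assumptions for the sketch, not the module's actual ones; the
guard reflects the heartbeat validity rules of section 8.3.7.5 of the
specs (firstSN at least 1, lastSN at least firstSN minus one for an
empty history cache):

\begin{lstlisting}
%% Hypothetical record for the internal representation of a
%% heartbeat sub-message.
-record(heartbeat, {reader_id, writer_id, first_sn, last_sn, count}).

heartbeat(ReaderId, WriterId, FirstSN, LastSN, Count)
  when FirstSN >= 1, LastSN >= FirstSN - 1 ->
    #heartbeat{reader_id = ReaderId, writer_id = WriterId,
               first_sn  = FirstSN,  last_sn   = LastSN,
               count     = Count}.
\end{lstlisting}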

\section{rtps\_psm}

The \lstinline{rtps_psm} module implements most of the functionality
from chapter 9 of the specs, the Platform Specific Model. It encodes
and decodes RTPS messages and the sub-messages they contain,
translating the internal platform independent representation into the
platform specific representation and vice versa.

The module supports both big-endian and little-endian encoding and
decoding. Because there is little use in switching endianness for
encoding, it is currently set at compile time, but it could become a
parameter as well. It may also be an idea to make the endianness depend
on the CPU architecture the application runs on. Endianness for
encoding currently defaults to big-endian because most network
protocols use big-endian byte order.

Other remarks:

\begin{itemize}
	\item
	Somehow the \lstinline{rtps_psm} module should obtain the maximum message size
	from the transport, probably through the sender which needs similar
	information as well.
\end{itemize}
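Erlang's bit syntax makes the dual-endianness handling straightforward.
The following sketch encodes and decodes the sub-message header from
the specs (submessageId, flags, octetsToNextHeader), where the E flag,
the least significant bit of the flags octet, selects the endianness of
the fields that follow; the function names are illustrative:

\begin{lstlisting}
%% Encode a sub-message header; the 16-bit octetsToNextHeader field
%% follows the requested endianness.
encode_header(Id, Flags, Len, big) ->
    <<Id:8, Flags:8, Len:16/big>>;
encode_header(Id, Flags, Len, little) ->
    <<Id:8, Flags:8, Len:16/little>>.

%% Decode a header, selecting the endianness from the E flag
%% (E = 0 means big-endian, E = 1 little-endian).
decode_header(<<Id:8, Flags:8, Rest/binary>>) ->
    case Flags band 1 of
        0 -> <<Len:16/big,    Body/binary>> = Rest, {Id, Flags, Len, Body};
        1 -> <<Len:16/little, Body/binary>> = Rest, {Id, Flags, Len, Body}
    end.
\end{lstlisting}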

\section{Testing}

Running tests will cause network traffic and may interfere with the
operation of regular RTPS related applications. Please keep this in
mind! Tests should preferably be run only in a dedicated and isolated
network segment.

Common tests are used to run tests in which more than one module is
involved and for tests which require a networked setup.

\href{https://www.wireshark.org/}{Wireshark} is used to inspect the
actual network traffic. The RTPS protocol is supported by Wireshark,
even though it makes some vendor-specific assumptions about the
interpretation of GUIDs that are not strictly according to the specs.

\section{TODOs and reminders}

General thoughts about the specs:

\begin{itemize}
	\item
	Incomplete: at some points, for example the discovery modules and
	fragmentation, the specs are missing sufficient detail to be properly
	implemented;
	\item
	Ambiguous, probably as a consequence of bolted-on functionality
	(fragmentation), left-over functionality from previous versions, and
	the specifications never having been reviewed from the ground up;
	\item
	Complexity due to never-implemented and forgotten features. Take for
	example the InfoReply submessage, whose purpose is clear but whose
	place within the protocol is not further defined. Or, even worse, the
	writer liveliness protocol, which is not relevant for RTPS;
\end{itemize}

This is a list of things still open.

\begin{itemize}
	\item
	The application internally uses the Erlang system time with
	nanosecond resolution. So, internally it does not use the seconds /
	fractions representation as defined in the specs, only in the
	messages.
	\item
	The topic kind (no\_key or with\_key) is set as part of the
	configuration of endpoints. This value is not communicated as part of
	the messages exchanged. So, on discovery, it becomes clear that an
	endpoint has a particular topic kind and the endpoint is configured
	with that topic kind. On sending and receiving data, the topic kind is
	not part of the message but inherently known from the configuration of
	the endpoint. Topic kind is only used by the rtps writer to determine
	if unregistering or disposing of a data value must be communicated
	explicitly by a separate cache change.\\
	However, 8.4.4 states that ``\emph{The setting of the topicKind
		attribute in the RTPS Writer and Reader. This controls whether the
		data being communicated corresponds to a DDS Topic for which a Key has
		been defined.}'' but this also only refers to configuration and not
	the behavior itself.
	\item
	Serialization still has to be determined: use CORBA-related
	serialization (CDR), Protocol Buffers, ASN.1 or something else, and
	also determine exactly where it is done.
	\item
	Endianness: encoding is done using big-endian as a default. Is this a
	proper default? How about making the default system dependent, i.e.
	if you run RTPS on a big-endian processor, make encoding use the
	big-endian convention? However, as far as I know, network protocols
	mostly default to big-endian, making big-endian a good default.
	\item
	In the reader\_locator, not only merge requested changes with the
	list of previously requested changes, but also look at the list of
	unsent changes. A requested change may already be in the list of
	unsent changes and can be ignored at this point, since the change
	will be sent anyhow.
	\item
	Push mode is set to true by default. Is this ok?
	\item
	In the case of reader\_proxies, a cache change is processed by one or
	more proxies, which will potentially produce one or more submessages
	to be processed by the rtps\_sender. The problem is that a single
	change, because it is processed by more than one reader proxy, is
	converted into two or more submessages (depending on the DDS filter)
	with different destination GUIDs but otherwise identical. Messages
	like these should probably be aggregated into a single submessage
	with destination GUID unknown, reducing bandwidth. Making the
	rtps\_sender detect and aggregate these messages is not the proper
	way to handle this situation, because a lot of processing to create
	the individual submessages is wasted when they are aggregated, plus
	there is the extra overhead of finding and merging these submessages.
	Also look at the remarks in rtps\_changes:data\_or\_gap().
	\item
	What happens on counter overflow, such as with the heartbeat count?
	\item
	Implement fragmented data messages and the nackfrag stuff.
	\item
	The reader doesn't implement functions to add and remove locators as
	the writer does. This is according to the specs but not very logical.
	\item
	Currently, starting a stateless writer will automatically start a
	reader locator as well. This should be removed, resulting in starting
	an \lstinline{empty} stateless writer to which reader locators are
	added explicitly. This is considered more in line with the specs (see
	the last line of p.~84, which states
	\lstinline{the_rtps_writer.reader_locator_add(a_locator);}).

	\item
	In the module rtps\_psm, the encode functions use guards on their
	arguments, but the decode functions have no error checking. MUST BE
	REVIEWED: decode functions process input to the application and
	should have error checking. There is no need for guards on the
	encoding functions because their input is always ok.
	\item
	Section 8.4.2.2.1, ``Writers must not send data out-of-order'',
	states that:\\
	``A Writer must send out data samples in the order they were added to
	its HistoryCache.'' We did assume data samples are sent in sequence
	number order. Is this an error in the specs, or should we reconsider
	our implementation?\\
	THIS MAKES THE WHOLE INTERPRETATION OF MISSING, FIRST, LAST ETC.
	DAMNED HARD.
	\item
	For most operations (API calls), the return value has to be mapped to
	the RTPS / DDS return values. For example, creating a reader locator
	might fail and still has to be mapped to a corresponding return value.
\end{itemize}

\chapter{Testing}

\section{Performance testing}

\begin{itemize}
	\item Latency;
	\item Bandwidth consumption and throughput;
	\item Resource consumption;
	\item Security?
\end{itemize}

\end{document}
