\documentclass[a4paper]{scrartcl}

\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{indentfirst}
\usepackage{paralist}
\usepackage{listings}
\usepackage{alltt}
\usepackage{emp}

\title{236351~--~Distributed~Systems \\ HW3}
\author{Kochelorov~Dmitri~--~320744741\\Artem~Barger~--~317832822}
\date{}

\ifx\pdftexversion\undefined
\usepackage[dvips]{graphicx}
\else
\usepackage[pdftex]{graphicx}
\DeclareGraphicsRule{*}{mps}{*}{}
\fi

\lstdefinelanguage{CSharp}
{
 morecomment = [l]{//},
 morecomment = [l]{///},
 morecomment = [s]{/*}{*/},
 morestring=[b]",
 sensitive = true,
 morekeywords = {abstract,  event,  new,  struct,
   as,  explicit,  null,  switch,
   base,  extern,  object,  this,
   bool,  false,  operator,  throw,
   break,  finally,  out,  true,
   byte,  fixed,  override,  try,
   case,  float,  params,  typeof,
   catch,  for,  private,  uint,
   char,  foreach,  protected,  ulong,
   checked,  goto,  public,  unchecked,
   class,  if,  readonly,  unsafe,
   const,  implicit,  ref,  ushort,
   continue,  in,  return,  using,
   decimal,  int,  sbyte,  virtual,
   default,  interface,  sealed,  volatile,
   delegate,  internal,  short,  void,
   do,  is,  sizeof,  while,
   double,  lock,  stackalloc,
   else,  long,  static,
   enum,  namespace, string}
}

\begin{document}

\newcommand{\XXX}{\textbf{XXX:} }

\maketitle

\newcommand{\Clients}{\textbf{Clients} }
\newcommand{\FSS}{\textbf{Flight Search Service} }
\newcommand{\AAS}{\textbf{Airline Services} }

\section*{Overall description}
The system has 3 main components: \Clients, \FSS, and \AAS. \AAS register
themselves with the \FSS. \Clients send search queries to the \FSS. Upon
receipt of such a query, the \FSS in turn queries the registered \AAS, and
returns the received results to the client.

\section*{Clients}
The client queries the \FSS using a SOAP-based Web service provided by the
latter, and displays the received result.

\section*{Flight Search Service}
The \FSS exposes (through WSDL) a method to support client queries:
\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Methods Exposed to Clients,label=lst:expose_to_clients]
FlightRoutes search(string source, string destination,
  string date);
\end{lstlisting}

Additionally, the \FSS exposes methods (again, through WSDL) for
registration and update of registered Airline servers:
\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Methods Exposed to Airline Servers,label=lst:expose_to_servers]
void register(string endpoint, string serverName,
  string alliance, ServersCollection replications);
void update(string serverName, string alliance,
  ServersCollection replications);
\end{lstlisting}

When an airline server comes up, it registers itself, specifying the airline
alliance it belongs to, its name, and the list of alliance servers replicating
its data. The flight search server uses this information for load balancing and
traffic saving. Since server $A$'s information is replicated at some other
server $B$, whenever $B$ is queried for some flight, it can use the copy of
$A$'s data it holds and perform the search on that data as well. That is, when
the flight search server queries $B$, it also gets a reply for $A$ from $B$,
and it does not need to query $A$ additionally. The next time the flight search
server needs to query $A$, it can either find another server (different from
$B$) that also replicates $A$'s data and query it, or query $A$ itself.

When an airline server fails, there is always another server from the same
alliance that takes care of its data (the replication protocol is covered
below). This server updates the flight search server with the new list of
servers replicating the failed one using the exposed \textit{update} method.
Additionally, when the \textit{update} method is called for some server $S$,
the flight search server knows that $S$ has failed, and marks it accordingly.
Later, when $S$ comes up again, it re-registers itself (using the
\textit{register} method), and the flight search server marks it as alive
again.

\section*{Airline Servers}
The airline servers are grouped into alliances (clusters). The servers from the
same alliance back each other up, and upon receipt of a flight search query
they additionally look for connecting flights via other servers of the
alliance, as required by the exercise.

\subsection*{Alliance Management}
All communication between the servers inside a cluster is done using Ensemble.
The cluster servers back each other up. The replication protocol is detailed
below. For now, we define the \textit{replication degree} of some server $S$
as the number of cluster servers holding $S$'s data, including $S$ itself. In
our system, the replication degree is the same for all servers and equals 4.
That is, each server is replicated by 3 other servers in the cluster. Below we
describe the cluster management in more detail.


\paragraph*{A cluster server} has 4 main fields:
\begin{itemize}
  \item \verb=servername= --- the name of the server (its unique identifier
    throughout the lifetime of the system; i.\,e. an eternal identifier)
  \item \verb=tickets= --- the server's data
  \item \verb=replicatedAt= --- the list of cluster servers replicating
    this server's data
  \item \verb=hasReplicationOf= --- a map of the cluster servers replicated by
    this server, of the form:
    \begin{alltt}\{ servername \(\mapsto\) (replicatedAt, tickets) \}\end{alltt}
\end{itemize}
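For concreteness, this state might be sketched as the following class (a
hypothetical rendering; \textit{ReplicaEntry} and the concrete collection
types are our own names, used only for illustration):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of a Cluster Server State,label=lst:server_state]
class ClusterServer
{
  // Eternal unique identifier of this server
  string servername;
  // This server's own flight data
  TicketsCollection tickets;
  // Names of the cluster servers replicating this server's data
  List<string> replicatedAt;
  // Data of other servers replicated at this server:
  // servername -> (replicatedAt, tickets)
  Dictionary<string, ReplicaEntry> hasReplicationOf;
}
class ReplicaEntry
{
  List<string> replicatedAt;
  TicketsCollection tickets;
}
\end{lstlisting}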

\paragraph*{On a new view creation}
every cluster server broadcasts an \textit{Introduction message} to other
members of the just created view. The Introduction message contains the
servername and its rank in the new view.

Ensemble provides the servers with the view information; in particular, each
server knows exactly the number of members of the new view, and thus it knows
when it has received the Introduction messages from all other members.
Additionally, by assumption the members do not fail until a new view is
created; thus, this step is finite and terminates.

The introduction step has 2 main purposes:
\begin{inparaenum}[\itshape 1\upshape)]
\item to determine the new view members and/or the failed members of the
  previous view;
  and
\item to map a rank to its server.
\end{inparaenum}

\paragraph*{A map from servername to its rank\footnote{Or a map from the rank to
the servername; it does not matter since this relation is an invertible
function.}}
is necessary for identifying the servers of a cluster across different views.
The servers in a cluster are uniquely identified by their names. While the
server ranks and addresses assigned by Ensemble might change between
views\footnote{The ranks and addresses might change not necessarily between
sequential views; for example, a server could fail and return with a different
address after a number of views have already passed.},
the servername is predefined and immutable. Since in Ensemble the view members
are identified by the assigned ranks, and those ranks are valid for the
particular view only, the introduction step allows the view members to build
a map from servernames to their ranks in the current view, and to keep the
mapping from servernames to servers unchanged throughout the system lifetime.
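As an illustration, the view members could build these maps from the received
Introduction messages roughly as follows (a sketch; \textit{IntroductionMessage}
and the variable names are hypothetical and not part of Ensemble's API):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of Building the Rank Maps,label=lst:rank_map]
// Build the servername <-> rank maps for the current view.
// numMembers comes from the Ensemble view information.
var rankOf = new Dictionary<string, int>();
var servernameOf = new string[numMembers];
foreach (IntroductionMessage msg in receivedIntroductions)
{
  rankOf[msg.servername] = msg.rank;
  servernameOf[msg.rank] = msg.servername;
}
\end{lstlisting}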

\paragraph*{The new servers}, which have just joined the view\footnote{These
servers did not participate in the previous view.}, need to
take care of replicating their data in the cluster. In our system the
replication degree of each server is 4, thus a new server chooses 3 random
distinct servers participating in the current view, and sends them a replica of
its data. The data being sent is of the form
\verb=(servername, tickets, replicatedAt)=; that is, the server sends its name,
its data, and the list of servers it has just chosen.

The assumptions given in the exercise guarantee that after a view change there
is a short ``grace'' period during which no view change can occur; we utilize
this period to run the replication protocol, and we can assume that none of
the chosen servers will fail. There is an edge case when the number of view
members is less than the replication degree. In that case, the number of view
members is used as the replication degree; that is, the new server replicates
itself on all other view members. When the number of view members increases
again, all servers should increase the number of their replications up to the
replication degree.

Additionally, the new server might receive replications from other view
members, i.\,e. packets of the form \verb=(servername, tickets, replicatedAt)=;
in this case, it should store the received data and update its
\verb=hasReplicationOf= accordingly.
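The replication step of a new server might be sketched as follows (hypothetical
helpers; \textit{ChooseRandomRanks} and \textit{SendReplication} stand for the
random peer selection and the Ensemble point-to-point send, respectively):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of the New-Server Replication Step,
  label=lst:new_replication]
// Replicate this server's data on min(3, n - 1) random peers.
int degree = Math.Min(3, numMembers - 1);
List<int> peers = ChooseRandomRanks(degree, myRank);
replicatedAt = peers.ConvertAll(r => servernameOf[r]);
foreach (int r in peers)
  SendReplication(r, servername, tickets, replicatedAt);
\end{lstlisting}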

\paragraph*{When some (or all) of the new servers are not really new}, but are
servers that participated in the cluster in previous views and failed for some
reason, their old data is still replicated and maintained by the ``cluster''.
When such servers come back, they might carry new data; thus, the first thing
to be done is to invalidate their old data replicated in the cluster. To do so,
every other view member should check whether it replicates data belonging to
those new servers, and if it does, it should discard that data. After the
introduction step each server can determine the servernames of such new
servers; by testing whether each such \textit{servername} is a key in its
\verb=hasReplicationOf= field, it knows whether it should invalidate some of
the replicated data. Data invalidation is simply the removal of the
corresponding key-value pairs from \verb=hasReplicationOf=.
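The invalidation itself is straightforward; a sketch (assuming the set
\verb=rejoinedServernames= was computed during the introduction step):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of Stale-Replica Invalidation,label=lst:invalidate]
// Discard stale replicas of servers that rejoined the view
// (they might have come back with new data).
foreach (string rejoined in rejoinedServernames)
  hasReplicationOf.Remove(rejoined);
\end{lstlisting}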

\paragraph*{When there are servers that failed\footnote{The servers that
participated in one of the previous views, but do not appear as members in the
new view.}}, the following issues arise:
\begin{inparaenum}[\itshape 1\upshape)]
  \item the failed server is not available, and someone should manage its data;
  \item the number of data replications of these servers is below the
    replication degree;
  and
  \item the number of data replications of the servers backed up by the failed
    ones is below the replication degree.
\end{inparaenum}

The first issue amounts to choosing a server that is still alive to take care
of the data of the failed one. That is, we have a classic leader election
problem. The leader is the alive\footnote{The servers that participate in the
current view are called alive.} server replicating the failed one (i.\,e. among
the servers that contain its name as a key in their \verb=hasReplicationOf=)
that has the minimal rank. We assume that at least one such server is
available\footnote{The exercise assumptions say that at most one server can
fail at a time, and our replication degree is 4.}. After the introduction
step, for each failed server there is exactly one chosen leader; let us call it
the \textit{deputy}. This deputy manages the replication protocol on behalf of
the failed server.
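The deputy election can be sketched as follows (a hypothetical rendering;
\verb=rankOf= is the map built during the introduction step, and
\textit{AliveReplicatorsOf} stands for filtering the holders of the failed
server's data by the current view membership):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of the Deputy Election,label=lst:deputy]
// The deputy of a failed server is the alive server replicating
// it that has the minimal rank in the current view.
int DeputyRank(string failedName)
{
  int best = int.MaxValue;
  foreach (string s in AliveReplicatorsOf(failedName))
    best = Math.Min(best, rankOf[s]);
  return best;
}
bool iAmDeputy = (DeputyRank(failedName) == myRank);
\end{lstlisting}

Since every alive server computes the same minimum over the same set, the
election requires no extra communication beyond the introduction step.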

After a deputy is elected for each failed server, the solution to the second
issue is simple. The deputy should check whether the number of data
replications of the failed server is below the replication degree, and if so,
it should increase it. To check that, the deputy tests:
\begin{inparaenum}[\itshape 1\upshape)]
  \item whether the size of \verb=replicatedAt= of the failed server is below
    the replication degree\footnote{This might happen when the failed server
    has ``just failed''; i.\,e. it participated in the previous view, but does
    not participate in the current one.};
  or
  \item whether one of the servers in \verb=replicatedAt= of the failed one
    has failed as well
\end{inparaenum}.

In both cases above, the deputy should replicate the data of the failed server
so that the number of replications equals the replication degree, and update
its \verb=replicatedAt=. To do this, it chooses new servers, updates the
\verb=replicatedAt=, sends a replication request to the new servers, and sends
an update of \verb=replicatedAt= to the ``old'' replicating servers. The
replication request contains \verb=(servername, tickets, replicatedAt)=, where
all fields belong to the failed server; i.\,e. \verb=servername= is the name of
the failed server, \verb=tickets= is the data of the failed server, and
\verb=replicatedAt= is the updated list of the servers replicating the failed
one. The update request contains \verb=(servername, replicatedAt)=, where all
fields again belong to the failed server.

The third issue is similar to the one above. Each server should check whether
all servers replicating its data are still alive. If there are some failed
servers in its \verb=replicatedAt=, it should choose new ones, send them a
replication request, and send an update request to the alive servers from
\verb=replicatedAt=.

\subsection*{Replication protocol analysis}
In the replication protocol we proposed above the minimal replication degree of
a node is 4. Thus it is tolerant to up to 3 simultaneous faults. After each
topology change (addition or removal of a node) the replication protocol is run
to make sure the minimal replication degree remains 4.

In this exercise we were requested to implement a replication protocol tolerant
to 1 simultaneous fault and to $n - 1$ non-simultaneous faults. Our protocol
implementation comply with these constraints.

\section*{WCF Extensions}
As a part of this exercise, we have implemented 2 mechanisms through WCF
extensions: \textit{Caching} and \textit{Logging}. They are described below.

\subsection*{Logging}
Each request to the Flight Search service should be logged to a file whose
name is given as a parameter to the flight search server.

\begin{lstlisting}[language=CSharp,float,frame=trbl,
  caption=Addition of Logger Behavior,label=lst:logger_add]
...
foreach (ServiceEndpoint endpoint in
  searchHost.Description.Endpoints)
{
  endpoint.Behaviors.Add(new LoggerBehavior(filename));
}
...
class LoggerBehavior : IEndpointBehavior
{
  public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
    EndpointDispatcher endpointDispatcher)
  {
    ...
    op.ParameterInspectors.Add(new Logger( fileName));
    ...
  }
  ...
}
...
class Logger : IParameterInspector
{
  ...
  public void AfterCall(string operationName,
    object[] outputs, object returnValue,
    object correlationState)
  {
    // Log the input arguments and the elapsed time
    ...
  }
  ...
  public object BeforeCall(string operationName,
    object[] inputs)
  {
    // Store the current time and input arguments
    ...
  }
}
\end{lstlisting}

To implement this feature, during the flight search service initialization we
add a \textit{LoggerBehavior} to each of its endpoints
(see Listing~\ref{lst:logger_add}), which in its \textit{ApplyDispatchBehavior}
method adds a new parameter inspector: \textit{Logger}. The \textit{Logger}'s
purposes are:
\begin{itemize}
  \item Upon search query receipt, it stores the current time and the input
    parameters (query parameters).
  \item After the search is performed, and before the response is dispatched
    back to the client, the Logger calculates the total time taken by the query
    and writes a record including this time and input parameters to the log
    file.
\end{itemize}

Thus, the addition of the \textit{Logging} feature is transparent and does not
directly affect other code.

\subsection*{Caching}
Caching is done by overriding the standard WCF \textit{operation invoker}
(see Listing~\ref{lst:invoker_add}) with our own. As a result, each time the
operation is invoked (in our case, the \textit{search} operation), our code
runs instead of the standard one.

\begin{lstlisting}[language=CSharp,float,frame=trbl,
  caption=Addition of Cache,label=lst:invoker_add]
...
[ServiceContract]
interface ISearchService
{
  [OperationContract]
  [CacheOperationBehavior]
  FlightRoutes search(string source, string destination,
    string date);
}
...
class CacheOperationBehaviorAttribute :
  Attribute, IOperationBehavior
{
  ...
  public void ApplyDispatchBehavior(
    OperationDescription operationDescription,
    DispatchOperation dispatchOperation)
  {
    IOperationInvoker invoker = dispatchOperation.Invoker;
    dispatchOperation.Invoker = new
      CacheOperationInvoker(invoker);
  }
  ...
}
...
\end{lstlisting}

The class \textit{CacheOperationInvoker} simulates the standard invoker
behaviour in everything except the \textit{Invoke} method. Upon creation,
\textit{CacheOperationInvoker} accepts the standard invoker as a constructor
argument and stores it. Whenever the \textit{Invoke} method is called, the
first thing to check is whether the result of the query is already cached. If
it is, the cached result is returned rather than performing a new search
query. Otherwise, the search is performed using the stored standard invoker,
and its output is stored in the cache.

\begin{lstlisting}[language=CSharp,float,frame=trbl,
  caption=Cache Operation Invoker,label=lst:invoker_impl]
...
class CacheOperationInvoker : IOperationInvoker
{
  ...
  public CacheOperationInvoker(IOperationInvoker invoker)
  {
    this.invoker = invoker;
  }
  public object Invoke(object instance, object[] inputs,
    out object[] outputs)
  {
    // If there is an entry in the cache for this query,
    // return its output. Otherwise use the standard invoker,
    // and store the output in the cache.
    ...
  }
  ...
}
...
\end{lstlisting}

The most problematic part of the \textit{Caching} feature is keeping the cache
``consistent'' with the alliance clusters. The alliance servers may fail and
come up with new data. Thus the data in an alliance cluster is subject to
change, and the cached results on the search server may become outdated.

In order to keep the cache ``consistent'' with the alliances, the alliance
servers should inform the cache about changes in the alliance topology. As we
said above, the flight search server exposes two methods for airline servers
(see Listing~\ref{lst:expose_to_servers}): \textit{register} and
\textit{update}. When an airline server comes up, either for the first time or
after a failure, it registers itself with the flight search server using the
\textit{register} method. In both cases it might come with new data, so the
flight search server invalidates all cache entries containing flights of this
airline server (either as a main flight or as a connection flight). As we saw
above, when an airline server fails, there is another server, called the
\textit{deputy}, in its alliance (cluster) that takes control of the data of
the failed one. That is, upon a failure of a cluster server, the ``whole''
cluster data remains unchanged; thus the cache stays valid, and there is
nothing to do in that case.
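The invalidation on \textit{register} might look roughly like this (a sketch;
\textit{Involves} is a hypothetical predicate checking whether a cached route
mentions the given airline server, as a main or connection flight):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of Cache Invalidation on Registration,
  label=lst:cache_inval]
// On register(): drop every cached result containing
// flights of the (re)registering airline server.
void InvalidateCacheFor(string serverName)
{
  var stale = new List<string>();
  foreach (var entry in cache)
    if (entry.Value.Involves(serverName))
      stale.Add(entry.Key);
  foreach (string key in stale)
    cache.Remove(key);
}
\end{lstlisting}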

Thus, the addition of the \textit{Caching} feature is transparent and does not
directly affect other code.

\section*{Search}
The search is performed as follows. The client invokes the \textit{search}
method exposed by the flight search server. The request is dispatched to the
latter, where the parameters are deserialized and passed to our registered
parameter inspector, i.\,e. \textit{Logger::BeforeCall()}. Then the operation
invoker is called, i.\,e. \textit{CacheOperationInvoker::Invoke()}. If the
query is cached, the result is retrieved from the cache and returned,
\textit{Logger::AfterCall()} is called, the operation is written to the log
file, and the result is sent back to the client.

When the query is not found in the cache, the flight search server queries the
alliances using the standard invoker. Each airline server exposes three
methods for search:

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Methods Exposed to Flight Search Server,label=lst:expose_to_fss]
// Select all flights from a given source to a given
// destination on a given date.
public TicketsCollection search(string source,
  string destination, string date);
// Select all flights which start at the source on a given date.
public TicketsCollection searchFrom(string source, string date);
// Select all flights which end at the destination
// on a given date.
public TicketsCollection searchTo(string destination,
  string date);
\end{lstlisting}

The flight search server uses these three methods to perform a search. It
iterates over the list of alliance servers; for each such server it searches
for direct flights using the \textit{search} method, and for indirect ones
using \textit{searchFrom} and \textit{searchTo}. When some server $S$ being
queried replicates the data of another alliance server $T$, then, as discussed
above, $S$ performs the search in $T$'s data as well, and the result returned
by $S$ is based on both $S$'s and $T$'s data. Thus, there is no need to
additionally query $T$\footnote{Of course, there is an edge case when $T$ is
the deputy of some failed server. In that case, the flight search server
should query $T$ anyway.}\footnote{We could optimize our solution by
implementing some algorithm for choosing the alliance servers that should be
queried by the flight search server so that the total number of queries is
minimized. But since the complexity of implementing such an algorithm is high
(it seems to be an NP-hard problem, and an approximation algorithm would have
to be implemented), we did not do this.}.
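The iteration with replica skipping can be sketched as follows (hypothetical
names; \textit{CoveredServernames} stands for the set of servers whose data the
reply is based on, and \textit{Query} abbreviates the three search calls):

\begin{lstlisting}[language=CSharp,frame=trbl,
  caption=Sketch of Alliance Querying with Replica Skipping,
  label=lst:query_plan]
// Query alliance servers, skipping those whose data was already
// covered by a previously queried replica (unless they are
// deputies of some failed server).
var covered = new HashSet<string>();
var results = new List<TicketsCollection>();
foreach (AirlineServer s in allianceServers)
{
  if (covered.Contains(s.Name) && !s.IsDeputy)
    continue;
  Reply reply = s.Query(source, destination, date);
  results.Add(reply.Tickets);
  covered.UnionWith(reply.CoveredServernames);
}
\end{lstlisting}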

Finally, the flight search server aggregates the results and sends them back
to the client. But before the results are serialized and actually sent to the
client, they are cached at the flight search server.

\end{document}
