\documentclass[letterpaper, 12pt, titlepage]{article}
%=======Unpackage Things===============
\usepackage{array}
\usepackage{color}
\usepackage{colortbl}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{listings}
%\usepackage{fullpage}
\usepackage{epsfig}
\usepackage{latexsym}
\usepackage{amssymb}
\usepackage{ulem}
\usepackage{amstext}
\usepackage{float}
\usepackage{mathrsfs}
\usepackage{geometry}
\geometry{
    margin=4.4cm,
    headheight=35pt,
}
\usepackage{setspace}
\usepackage{enumerate}
\usepackage{url}
\usepackage{overcite}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{titlesec}



\graphicspath{ {testImages/} } 

\newcommand{\PreserveBackslash}[1]{\let\temp=\\#1\let\\=\temp}
\newcolumntype{C}[1]{>{\PreserveBackslash\centering}p{#1}}
\newcolumntype{R}[1]{>{\PreserveBackslash\raggedleft}p{#1}}
\newcolumntype{L}[1]{>{\PreserveBackslash\raggedright}p{#1}}



\definecolor{bleudefrance}{rgb}{0.19, 0.55, 0.91}
\definecolor{airforceblue}{rgb}{0.36, 0.54, 0.66}
\definecolor{babyblueeyes}{rgb}{0.63, 0.79, 0.95}
\definecolor{skyblue}{rgb}{0.53, 0.81, 0.92}
\definecolor{powderblue}{rgb}{0.69, 0.88, 0.9}
\definecolor{radicalred}{rgb}{1.0, 0.21, 0.37}

\pagestyle{fancy} % define page header and footer
\fancyhead[L]{HADPSS}
\fancyhead[C]{WonderFour}
\fancyhead[R]{\thepage~of \pageref{LastPage}}
\fancyfoot[L]{}
\fancyfoot[C]{}
\fancyfoot[R]{}
\setlength{\headheight}{15pt}
\begin{document}

%==title==
\title{COMP 6231}
\thispagestyle{empty}
\setcounter{tocdepth}{2}
\newpage
\begin{center}
    {\bf\LARGE COMP 6231: $\mathcal{D}$istributed $\mathcal{S}$ystem $\mathcal{D}$esign}

 \vspace{1cm}

     {\bf\Large  $\mathcal{P}$roject }

     \vspace{1cm}

 {\bf\Large Team: $\mathcal{W}$onder$\mathcal{F}$our   }




\vspace*{2.5in}
\begin{table}[htbp]
\caption{Team Members}
\begin{center}
\begin{tabular}{|c| c|}
\hline
Name & ID Number \\
\hline\hline

Xiaodong Li &7136609 \\
Xunrong Xia &6547079 \\
Xuefei Shi & 6832407\\
Omar Hachami & 6710999 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{center}
%\clearpage

\pagestyle{fancy}


\clearpage
\setcounter{page}{1}
\tableofcontents
 \clearpage
 
 \section{Introduction}
 \vspace{0.3cm}

\subsection{Purpose}


This document presents our solution to the course project of COMP 6231: the Highly Available Distributed Player Status System (HADPSS). We claim our solution is software-failure tolerant, crash-failure tolerant, scalable, and highly available.

Software failure tolerance is ensured by the design choice of having more than one active replica execute each request and by selecting the best (majority) result to return to the client.

Crash failure tolerance is ensured by the design choice of having a dynamic number of replicas, any of which can produce the response to a client request.

Our solution is scalable because it can run several groups of active replicas and balance the load between the groups.

Our solution is highly available because any failing replica can be replaced by a replica from the group of non-active replicas. Since the non-active replicas are kept ready for use, no synchronization delay is needed; otherwise, client request processing would have to stop until the new replica was synchronized with the rest of the active group. To keep the non-active replicas up to date, the system uses a store-and-forward mechanism that sends them every request that updates the data. Executing these requests at the non-active replicas does not affect the response sent to the client, since it happens after the client request has been answered; it affects only the readiness of a replica to become active. If a non-active replica fails to execute a request, the replica manager removes it from the list of non-active replicas, and it will never be promoted to active, since its data is no longer up to date.

\subsection{Context}
The document covers the system design and architecture, data structures, algorithms and protocols, implementation, data flow, and test scenarios and results.

\section{Requirements}

This section provides the requirements for HADPSS.
\subsection{HADPSS Model}
The HADPSS server system has three replicas running on different hosts on the network. One of the replicas is the designated leader.

~

The front end receives the request from the client via CORBA communication and forwards it via UDP communication to the replica leader.

~

The replica receives the request, sends it to the servers on the same host, and then passes the reply from the servers to the front end.


~

The replica leader has all the functions of a replica; in addition, it broadcasts the request received from the front end and performs failure detection. Moreover, it sends the comparison result to the replica manager and sends the execution result back to the front end.

~

The replica manager creates and initializes the actively replicated server subsystem. It also manages the server replica group information, tracks whether a replica has produced an incorrect result, and replaces a failed replica that has produced incorrect results three times successively with a good one.

~

The server receives the request from the replica, executes it, and then returns the result back to the replica.

\subsection{Function}

Our HADPSS performs the following operations:

\begin{flushleft}
Admin Operations:
\end{flushleft}
\begin{itemize}

\item getPlayerStatus(AdminUserName,AdminPassword,IPAddress)

The administrator can get the player status of all servers on the same host at a time.

\item suspendAccount(AdminName,AdminPassword,AdminIPAddress,userNameToSuspend)

The administrator can suspend a player on the same server. If no player with the provided username exists, a detailed message is sent back and written to the log.

\end{itemize}

Player Operations:
\begin{itemize}
\item createPlayerAccount(firstName,lastName,age,userName,password,IPAddress)

When a player invokes this method, the server at the same geo-location attempts to create a new account with the provided information. If the username already exists, a detailed message is sent back and written to the log.

\item playerSignIn(userName,password,IPAddress)

When a player invokes this method, the server at the same geo-location checks whether (1) the username exists, (2) the password matches the username, and (3) the player is not currently signed in. If all three conditions are met, the server sets the status of the player to online, returns a confirmation to the player, and updates the player status. Otherwise, a detailed message is sent back and written to the log.

\item playerSignOut(userName,IPAddress)

When a player invokes this method, the server at the same geo-location checks whether (1) the username exists and (2) the player is currently signed in. If both conditions are met, the server sets the status of the player to offline, returns a confirmation to the player, and updates the player status. Otherwise, a detailed message is sent back and written to the log.

\item transferAccount(userName,password,OldIPAddress,NewIPAddress)

When a player invokes this method, the server at the old geo-location checks that the username exists, and the server at the geo-location with NewIPAddress checks that the username does not exist there. If both conditions are met, the entire account is transferred from the old geo-location to the new one.

\end{itemize}
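The three sign-in checks above can be sketched as follows. This is a minimal Java illustration: the class name, the in-memory maps standing in for the server's account store, and the message strings are all hypothetical, not taken from the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the three sign-in checks described above.
public class SignInCheck {
    // Stand-ins for the server's account store: username -> password / online flag.
    static Map<String, String> passwords = new HashMap<>();
    static Map<String, Boolean> online = new HashMap<>();

    static {
        passwords.put("Clinton1", "bcbc1234");  // sample account for illustration
        online.put("Clinton1", false);
    }

    static String playerSignIn(String userName, String password) {
        if (!passwords.containsKey(userName))
            return "Sign-in failed: user name does not exist";   // check (1)
        if (!passwords.get(userName).equals(password))
            return "Sign-in failed: wrong password";             // check (2)
        if (Boolean.TRUE.equals(online.get(userName)))
            return "Sign-in failed: player already signed in";   // check (3)
        online.put(userName, true);                              // set status to online
        return "Sign-in succeeded";
    }
}
```

On failure, a detailed message like the ones returned here would also be written to the log.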

\section{General Design}


\subsection{Architecture}

Here is the figure of the general design of HADPSS:

\begin{figure}[H]
\centering
\includegraphics[scale=0.32]{OverallArchitecture.eps}
\caption{Overall architecture.}
\label{fig:overallarchitecture}
\end{figure}

\subsection{Module Responsibilities}

The solution implemented for this project allows starting any number of replicas, based on the available resources and the requirements on high availability and fault tolerance. The solution is crash-failure tolerant and software-failure tolerant.

~

The crash failure tolerance property is ensured by the possibility of starting any number of replicas, based on the resources and the high-availability requirements. For $N$ replicas started, the system may tolerate up to $N-1$ crash failures.

~

The software failure tolerance property is ensured by the design choice of executing the same request at every replica, coordinated by the replica leader. In this configuration, the leader's role in request execution is limited to executing the request locally and routing the same request to all active replicas, instead of providing them with the updates resulting from its own execution. Each replica therefore produces its own result after executing the request locally. All replicas send their results to the replica leader, which selects the best result based on the majority among the different results. After selecting the best result, the leader notifies the replica manager of the status of each result, i.e., whether or not it was part of the majority. Based on these notifications, the replica manager maintains a failure counter for each replica: the counter is incremented each time the manager is notified that the replica produced a result different from the majority, and reset to zero each time the replica produces the same result as the majority.
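The leader's selection of the best result can be sketched as a simple majority vote. This is an illustrative sketch that assumes replica results can be compared as plain strings; the class and method names are not taken from the actual implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: elect the result produced by the majority of replicas.
public class MajorityVote {
    static String selectBest(List<String> results) {
        Map<String, Integer> counts = new HashMap<>();
        for (String r : results)
            counts.merge(r, 1, Integer::sum);   // tally identical results
        String best = null;
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (e.getValue() > bestCount) { best = e.getKey(); bestCount = e.getValue(); }
        return best;                            // result backed by the most replicas
    }

    // true if this replica's result agrees with the majority
    // (this is the status reported to the replica manager)
    static boolean inMajority(String result, List<String> results) {
        return result.equals(selectBest(results));
    }
}
```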

\begin{itemize}
\item \textbf{Front end:} receives a request from the client, forwards it to the replica leader, gets the response, and sends it back to the client. Client requests are received through the CORBA middleware. After being received, the requests are marshalled and sent through a UDP link to the replica leader. Sending over the UDP link is synchronized, both to ensure the matching of requests with their responses and to avoid opening more than one port for this dedicated task at the replica leader. The ORB references generated for the front-end objects are exported to external files so that clients can access them and invoke the remote objects at the front end. The location of these references is defined in a property file passed as a parameter to the application; the same file may be used by the client application to locate the references.

\item \textbf{Replica leader:} runs a UDP server that listens on a parameterized port number. The replica leader's UDP port number is made available in an external file so that it is accessible to both the front end and the leader itself. The leader receives requests from the front end through its UDP port. Upon receiving a new request, it asks the replica manager for the list of active replicas, and then sends a UDP message to each replica in that list. Moreover, the leader executes the request locally by invoking the CORBA object at the appropriate local server. Like each replica, the leader is connected to three servers representing Europe, North America, and Asia; each request is routed to the appropriate local server based on the information in the request, specifically the IP address. The results collected from the local execution and from the active replicas are analysed by the leader in order to elect the best result based on the majority.

After selecting the best result, the replica leader sends it back to the front end and notifies the replica manager of the status of each replica, i.e., whether it was part of the majority or produced a different result.

The replica leader also multicasts the requests that update the data to the group of non-active replicas, so that they remain up to date and ready to replace any failing replica in the active group. This scheme ensures the high availability of the system.

\item \textbf{Replica:} when a replica is started, its first step is to notify the replica manager, which registers it as an active or non-active replica based on the parameters provided at launch. Once registered, the replica will receive each request from the replica leader and must provide a result after its execution. After registration, the replica runs a UDP server that listens on a parameterized port number; this port number is made available in an external file so that it is accessible to both the replica leader and the replica itself. The replica receives each UDP request and executes it locally by invoking the CORBA object at the appropriate local server, based on the information in the request. Like the replica leader, each replica is connected to three local servers.

\item \textbf{Replica manager:} receives the replicas' registration requests and manages a list of active replicas, which it provides to the replica leader on request. The replica manager also receives notifications about the result each replica produced for every request, so that it can manage the failure counters: the counter for a replica is incremented for each erroneous result and reset to zero for each correct result. Upon three successive erroneous results from the same replica, the replica manager disconnects that replica and connects a new one from the group of non-active replicas.

\item \textbf{Local Server:} at startup, the local servers register their ORB references in an external file, specified as a parameter and accessible to their corresponding replicas. Each local server receives its requests from its replica, processes them, and returns a reply.

\item \textbf{Client:} a multithreaded application that performs the player operations (account creation, sign-in, sign-out, and account transfer) and the administrator operations (getting the player status and suspending accounts).

\item \textbf{Common components:} the common components of the system are grouped into a dedicated API so that they can be reused by the different servers, avoiding clones in the source code. This component contains the tools for log generation, UDP message management, and the marshalling and unmarshalling of remote method invocations.
\end{itemize}
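The failure-counter bookkeeping described above can be sketched as follows. The threshold of three successive failures comes from the requirements; the class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the replica manager's failure counters: incremented on an
// erroneous result, reset to zero on a correct one; three successive
// failures trigger replacement by a non-active replica.
public class FailureCounters {
    static final int MAX_FAILURES = 3;
    static Map<Integer, Integer> counters = new HashMap<>();

    // Returns true when the replica must be replaced.
    static boolean recordResult(int replicaId, boolean inMajority) {
        if (inMajority) {
            counters.put(replicaId, 0);                         // correct result: reset
            return false;
        }
        int c = counters.merge(replicaId, 1, Integer::sum);     // erroneous: increment
        return c >= MAX_FAILURES;
    }
}
```

Note that a single correct result resets the counter, so only three *successive* failures trigger replacement.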

 \section{Detailed Design}


\subsection{Techniques and Algorithm}

The front end receives a client request as a CORBA invocation, forwards the request to the replica leader, receives the response, and sends it back to the client. The client-to-front-end and server-to-server communication is via CORBA.

The entire server system (replicas, front end, and replica manager) runs on a local area network, so communication among these components uses the UDP protocol. The front end-replica leader, replica leader-replica, and replica-replica manager communication is via unreliable UDP.

For each UDP communication, when a request is sent, the sender waits for a response from the receiver as a confirmation of reception. If after a certain time, say 12 seconds, the sender does not get any response from the receiver, it sends the request again.
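This retransmission scheme can be sketched as follows. The transport is hidden behind a hypothetical interface so that the retry logic stands on its own; a \texttt{null} return models a timeout, and the 12-second value is the one suggested above. The actual implementation would back this interface with a \texttt{DatagramSocket} and a receive timeout.

```java
// Sketch of the send-and-wait-for-confirmation scheme: retry until a
// response arrives or the attempt budget is exhausted.
public class ReliableSend {
    interface Transport {
        // Sends the request and blocks for up to timeoutMs; returns null on timeout.
        byte[] sendAndAwait(byte[] request, int timeoutMs);
    }

    static byte[] requestWithRetry(Transport t, byte[] request,
                                   int timeoutMs, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            byte[] reply = t.sendAndAwait(request, timeoutMs); // e.g. 12000 ms
            if (reply != null)
                return reply;        // confirmation received
            // timeout: fall through and send the request again
        }
        return null;                 // receiver presumed unreachable
    }
}
```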

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{UDP.eps}
\caption{Reliable UDP}
\label{fig:reliableudp}
\end{figure}

In order to avoid message loss and guarantee correctness, a FIFO strategy is needed for execution. In our implementation, the UDP send and receive operations between the front end and the replica leader are synchronized: only one transmission is allowed at a time. Requests arriving at the front end queue up; each request in the queue waits for the response to the request just before it, and only then is it forwarded to the replica leader for execution.
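The FIFO strategy can be sketched with a queue drained by synchronized methods, so only one transmission is handled at a time. This is a simplified, single-process illustration (the names are hypothetical), not the project's actual UDP code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch: requests queue up at the front end and are forwarded strictly
// one at a time, each waiting for the previous one's response.
public class FifoForwarder {
    private final Queue<String> pending = new ArrayDeque<>();
    final List<String> forwarded = new ArrayList<>();

    synchronized void submit(String request) {
        pending.add(request);        // arrival order is preserved
    }

    // One synchronized drain loop stands in for "one transmission at a time".
    synchronized void drain() {
        while (!pending.isEmpty()) {
            String req = pending.poll();
            forwarded.add(req);      // here: send over UDP and await the reply
        }
    }
}
```

Because both methods lock the same object, a request can never be forwarded while another transmission is in progress, which gives the FIFO ordering described above.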


\subsection{Implementation}

In this system, all three replicas are deployed on three different computers, which can be considered different hosts on the network.

As the replicas, front end, and replica manager together form the server system, which should run on a local area network, and the front end mediates between the client and the replica leader, the front end and replica manager are placed together with the replica leader.

The deployment diagram is therefore as follows:

~

\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{deploy.eps}
\caption{Deployment diagram}
\label{fig:deployment}
\end{figure}

\subsection{Data Flow}
Generally, the client sends a request to the front end, which marshals the request and forwards it to the replica leader using UDP. After receiving the request, the replica leader broadcasts it to the other replicas, and all the replicas then unmarshal the request and execute it. When execution is done, every replica sends its result to the replica leader, which performs failure detection and sends the detection result to the replica manager. The replica manager then replaces any replica with an erroneous result, while the replica leader marshals the correct result and sends it back to the front end. The front end then unmarshals the reply and sends it back to the client.
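The marshalling step can be sketched as packing the operation name and its parameters into a delimited string carried in the UDP payload. The delimiter and field layout here are assumptions of this sketch, not the project's actual wire format.

```java
// Sketch: flatten a request into a UDP payload and restore it.
public class RequestCodec {
    static final String SEP = "\u001F";   // unit separator, unlikely to occur in user data

    static byte[] marshal(String operation, String... args) {
        return (operation + SEP + String.join(SEP, args)).getBytes();
    }

    static String[] unmarshal(byte[] payload) {
        // index 0 is the operation, the rest are its parameters
        return new String(payload).split(SEP, -1);
    }
}
```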

\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{Dataflow.eps}
\caption{Overall data flow}
\label{fig:dataflow}
\end{figure}

\subsubsection{Startup data flow}

Before the startup phase, we write the numbers of activated and inactivated replicas in the properties files. At system startup, all the replicas register with the replica manager, declaring their own status, and the replica manager keeps a list of replicas with their status. The replica manager then notifies the leader of the activated replicas, and the leader sets up connections with them.
\newpage

~

\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{StartupDF.eps}
\caption{Startup data flow}
\label{fig:startupdf}
\end{figure}

\subsubsection{Normal operation data flow}

For the normal operations, we take player account creation as an example.

First of all, the client sends a request with the creation operation and all the parameters needed for it to the front end, and the front end marshals and forwards the request to the replica leader. The leader then broadcasts the request. All the replicas, including the leader, unmarshal and execute the request and send their results to the leader. The leader compares the results, sends the comparison to the replica manager, and sends the correct execution result back along the reverse path.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{CreationDF.eps}
\caption{Creation data flow}
\label{fig:creationdf}
\end{figure}

\subsubsection{Failure data flow}
When one replica has given a wrong result three times successively, it is considered failed. The replica manager keeps a counter for each activated replica; every time the leader sends a comparison result to the replica manager, the manager updates the counters.
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{FailureDF1.eps}
\caption{Failure data flow 1}
\label{fig:failuredf1}
\end{figure}

When one of the replicas (say replica 2 in Figure 7) sends a wrong result for the third time, the replica manager removes the failed replica from the list and sends a command forcing it to disconnect from the leader.
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{FailureDF2.eps}
\caption{Failure data flow 2}
\label{fig:failuredf2}
\end{figure}

After that, the replica manager activates a backup replica from the list and sends a notification with the updated list to the replica leader. The leader, after receiving the notification, sets up a connection with the newly activated replica.

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{FailureDF3.eps}
\caption{Failure data flow 3}
\label{fig:failuredf3}
\end{figure}


\subsection{UML Diagrams}
\subsubsection{The Class Diagram}
\includegraphics[scale=0.5]{ClassDiagram1}

\subsubsection{The Sequence Diagram}
\includegraphics[scale=0.5]{SequenceDiagram1}

\section{Tests}
\subsection{Preparation}
In order to execute the HADPSS, we need to start the replica manager, the front end, the replicas, and the corresponding local servers. All the replicas are started and registered with the replica manager; three of them are active, and the rest are inactive. The inactive replicas are used to replace a failed replica that has produced incorrect results three times successively.

Replica manager logs:
\textit{(Replica 0 is the leader; 1, 2, 5, and 9 are activated; 7 and 8 are inactivated.)}
\newpage
~

\begin{flushleft}
\includegraphics[scale=1]{test1}
\end{flushleft}

Replica leader:
\begin{flushleft}
\includegraphics[scale=1]{test2}
\end{flushleft}

Information of the players used for testing: \textit{(The player's status is offline by default.)}
\begin{table}[H]
\begin{center}

\begin{tabular}{c c c c c c}
\hline
User Name&	First Name	&Last Name&Age&	Password 	&IP address\\\hline
Clinton1	&Bill	&Clinton	&68	&bcbc1234	&132.168.1.4\\
Clinton2	&Bill&	Clinton&	68	&bcbc1234	&132.168.1.4\\
Clinton3	&Bill	&Clinton	&68	&bcbc1234	&132.168.1.4\\
Clinton2	&Bill	&Clinton	&68	&bcbc1234	&93.168.1.4\\
Clinton11	&Bill	&Clinton	&68	&bcbc1234	&132.168.1.4\\
Clinton12	&Bill	&Clinton	&68	&bcbc1234	&132.168.1.4\\\hline
\end{tabular}
\end{center}
\end{table}
Failure Test:
\newpage

~

\begin{table}[H]
\begin{center}
\begin{tabular}{c c }
\hline
User Name&	Fail Replica\\\hline

Clinton9001&     Replica\_9\\

Clinton9002&	Replica\_9\\
Clinton9003&	Replica\_9\\\hline
\end{tabular}
\end{center}
\end{table}

\subsection{Players CREATE A NEW ACCOUNT in NA server}
\begin{flushleft}
Actors: Players with the usernames Clinton1, Clinton2, and Clinton3, respectively.

Action: Each player creates a new account in the NA server.

Expected Result: All operations succeed.
\end{flushleft}
\subsection{GET PLAYER STATUS}
\begin{flushleft}
Actor: Administrator

Pre-condition: The NA server has three accounts. 

Action: The NA administrator gets the player status.

Expected Result: Only the NA server has 3 offline players.
\end{flushleft}

Response in each part of the system: 

\subsubsection{Client:}
\begin{flushleft}
\includegraphics[scale=1]{test3}
\end{flushleft}

\subsubsection{Front end:}
\begin{flushleft}
\includegraphics[scale=1]{test4}
\end{flushleft}

\subsubsection{Replica leader:}
\begin{flushleft}
\includegraphics[scale=1]{test5}
\end{flushleft}

\subsubsection{Replica manager:}
\begin{flushleft}
\includegraphics[scale=1]{test6}
\end{flushleft}

\subsection{CREATE AN ACCOUNT with an existing username}
\begin{flushleft}
Description: This tests whether the same username can be created on the same server.

Actor: Player

Precondition: The server has an account with the same username.

Action: Create a new account in the NA server, with username Clinton2.

Expected Result: Failed, since the username already exists.

\subsection{CREATE AN ACCOUNT with an existing username on a different server}
Description: This tests whether the same username can be created on different servers.

Actor: Player Clinton2

Precondition: The other server has an account with the same username.

Action: Create a new account in the NA server, with username Clinton2.

Expected Result: Succeed.

\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 3 offline players; the EU server has 1 offline player.

\subsection{SIGN IN}
Actors: Players with the usernames Clinton1 and Clinton2, respectively.

Precondition: The accounts exist.

Action: Both players sign in to their accounts.

Expected Result: Succeed.

~

Response in each part of the system: 
\subsubsection{Client:}
\includegraphics[scale=1]{test7}
\subsubsection{Front end:}
\includegraphics[scale=1]{test8}

\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 1 offline player and 2 online players; the EU server has 1 offline player.

\subsection{SIGN OUT}
Actors: Players with the usernames Clinton1 and Clinton2, respectively.

Precondition: The accounts exist.

Action: Both players sign out of their accounts.

Expected Result: Succeed.

~

Response of the system:
\subsubsection{Client:}
\includegraphics[scale=1]{test9}
\subsubsection{Front end:}
\includegraphics[scale=1]{test10}
\subsubsection{Replica leader}
\includegraphics[scale=1]{test11}
\subsubsection{Replica manager}
\includegraphics[scale=1]{test12}

\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 3 offline players; the EU server has 1 offline player.

\subsection{TRANSFER ACCOUNT with no conflict}
Actor: Player Clinton1.

Precondition: Clinton1 exists on the old server, and the new server does not have an account with the username Clinton1.

Action: Clinton1 transfers his account from the NA server to the EU server.

Expected Result: Succeed.

~

Response in each part of the system:
\subsubsection{Client:}
\includegraphics[scale=1]{test13}
\subsubsection{Front end:}
\includegraphics[scale=1]{test14}
\subsubsection{Replica leader}
\includegraphics[scale=1]{test15}
\subsubsection{Replica manager}
\includegraphics[scale=1]{test16}

\subsection{TRANSFER ACCOUNT with conflict}
Description: This tests whether an account can be transferred if the new server has an account with the same username.

Actor: Player Clinton2.

Precondition: Both the old server and the new server have an account with the same username.

Action: Clinton2 transfers his account from the NA server to the EU server.

Expected Result: Failed, since the new server already has an account with the username Clinton2.

\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 2 offline players; the EU server has 2 offline players.

\subsection{SUSPEND ACCOUNT with a username that does not exist}
Description: This tests whether the administrator can suspend an account whose username does not exist.

Actor: Administrator

Precondition: The account with username Clinton1 has been transferred to the EU server; that is, it no longer exists on the NA server.

Action: The NA server's administrator suspends the account with username Clinton1.

Expected Result: Failed.

\subsection{SUSPEND ACCOUNT with a username that exists}
Description: This tests whether the administrator can suspend an account whose username exists on the server.

Actor: Administrator

Precondition: The account with username Clinton3 exists on the NA server.

Action: The NA server's administrator suspends the account with username Clinton3.

Expected Result: Succeed.

Response of the system: 

~
\subsubsection{Client:}
\includegraphics[scale=1]{test17}
\subsubsection{Front end:}
\includegraphics[scale=1]{test18}

 
\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 1 offline player; the EU server has 2 offline players.

\subsection{Failure test of Replica}

Description: To test the replacement of a failed replica that has produced three incorrect results successively, we simulate the failure condition as follows: if a player whose username contains ``*00'' is created three times successively, the corresponding replica, ``replica\_*'', fails.

Action: Create three accounts successively, with usernames Clinton9001, Clinton9002, and Clinton9003, respectively.

Expected Result: Succeed, but Replica\_9 will fail.
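The simulated failure trigger can be sketched as a simple username check. This is illustrative only; the exact matching rule in the implementation may differ.

```java
// Sketch: replica "replica_N" simulates a wrong result when the username
// contains the digit N followed by "00", e.g. "Clinton9001" fails replica_9.
public class FailureTrigger {
    static boolean shouldFail(String userName, int replicaId) {
        return userName.contains(replicaId + "00");
    }
}
```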

Response in each part of the system: 

~
\subsubsection{Client:}
\includegraphics[scale=1]{test19}
\subsubsection{Front end:}
\includegraphics[scale=1]{test20}
\subsubsection{Replica leader}
\includegraphics[scale=1]{test21}
\subsubsection{Replica manager}
\includegraphics[scale=1]{test22}

\subsection{Test the new active Replica}
Description: Since we have replaced the failed replica with a new one, we need to test whether the new replica works correctly.

Actor: Player.

Pre-condition: A new replica was activated.

Action: Create a new account in the NA server, with username Clinton12.

Expected Result: Succeed.

Response in each part of the system:

~

\subsubsection{Client:}
\includegraphics[scale=1]{test23}
\subsubsection{Front end:}
\includegraphics[scale=1]{test24}
\subsubsection{Replica leader}
\includegraphics[scale=1]{test25}
\subsubsection{Replica manager}
\includegraphics[scale=1]{test26}

\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 2 offline players; the EU server has 2 offline players.

\subsection{Failure test of Replica without three successive wrong results}
Description: Since a replica is considered failed only when it produces wrong results three times successively, we test that it does not fail when it produces wrong results three times discontinuously.

Actors: Players

Action: Create accounts with usernames in the order: Clinton5001 $\rightarrow$ Clinton11 $\rightarrow$ Clinton5002 $\rightarrow$ Clinton5003

Expected Result: All operations succeed; no replica fails.

\subsection{GET PLAYER STATUS}
Actor: Administrator.

Precondition: The results of the previous operations match the expected results.

Action: The NA administrator gets the player status.

Expected Result: The NA server has 6 offline players; the EU server has 2 offline players.

\end{flushleft}
\section{Improvements}


\subsection{Dynamic Number of Replicas}

Any number of replicas can be initialized at the startup phase of the system; some of them run as active replicas while the others remain inactivated as backups.

Running replicas register with the replica manager, which keeps a list of registered replicas and notifies the replica leader. When one replica fails, the replica manager removes it from the list, activates a backup replica, adds it to the list, and then sends a notification to the replica leader.

With this implementation, the robustness of the system is enhanced. By adding replicas, the system can withstand more than one failure: it can survive $f$ failures if we activate $2f+1$ replicas.
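The $2f+1$ bound follows directly from the majority vote: with $n = 2f+1$ activated replicas, even if $f$ of them produce wrong results, the correct replicas still form a strict majority, so the leader elects a correct result:
\[
n - f = (2f+1) - f = f + 1 > f .
\]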
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Dynamic.eps}
\caption{Dynamic number of replicas}
\label{fig:dynamic}
\end{figure}

\subsection{Recovery}

To recover the database of a newly activated replica, we use the following strategy.

After sending the response to the front end, the leader broadcasts the request to the inactivated replicas to make sure they are up to date. And, as with the activated replicas, if any backup replica gives a wrong result three times, it is removed from the list stored in the replica manager.

With this improvement, every backup replica has the latest database; therefore, when it is activated to replace a former active replica, no data loss or inconsistency occurs, which makes the system much more reliable.

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{Recovery.eps}
\caption{Recovery}
\label{fig:recovery}
\end{figure}

 \section{Data Dictionary}
\begin{itemize}
\item \textbf{Front end:} the front end acts as a mediator between the client and the server system. It forwards client requests to the server system and passes the reply from the server system back to the client.
\item \textbf{Replica:} remote objects having the same reference but located on different hosts are considered replicas.
\item \textbf{Replica leader:} the replica leader is a special replica: it has all the functions of a replica but also maintains information about all the replicas and performs failure detection. It communicates directly with the front end and the replica manager and broadcasts requests to all the replicas. In our implementation, the replica leader never fails.
\item \textbf{Replica Manager:} the replica manager mainly checks the correctness of the replicas' execution results and replaces a failed replica with a good one.
\end{itemize}



\end{document}
