\subsection{Testing introduction}

How can we prove that a distributed system behaves according to its specification?
The answer is not immediately obvious. In our case, the object under test is a
peer-to-peer system in which all communication is asynchronous. The strongest,
and in fact the only, assumption we can make is that the underlying FreePastry
library, which implements the Pastry protocol, works according to its
specification and has been widely tested by the community. Beyond that we can
assume nothing, so we need to test our system from several different aspects.

\subsection{Test configuration}

At the time of writing, we are using version 2.1 of the FreePastry library.
This library comes with an integrated \textit{simulator} that allows us to
simulate fairly large peer-to-peer networks without additional hardware.
Depending on the test scenario, we test the system with a varying number of
nodes, but never fewer than 20; for the performance, stress and load testing we
generate up to 10,000 nodes, which we consider a sufficient and realistic number
of nodes for real deployments. The standard test configuration also involves
two parameters, namely:

\begin{itemize}
\item EXPANSION\_THRESHOLD: indicates the maximum number of index entries that can be stored on a node.
\item RETRACTION\_THRESHOLD: indicates the minimum number of index entries; when the count falls
below it, the entries are \textit{retracted}.
\end{itemize}

The first parameter has a default value of 100, while the second one defaults to
80. \textbf{TBD!!} These parameters are easy to change; whenever there is a need to adjust these
values, it will be stated separately in the relevant section.
\\
To the best of our knowledge, there is no option to adjust the size of a leaf set;
it is handled automatically by FreePastry.
\\
For the component-based testing we use the latest available version of \textit{JUnit}, as
well as the \textit{EclEmma} package, version 1.4.3, in order to obtain code-coverage statistics.
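To illustrate the style of these component tests, the sketch below exercises a minimal in-memory stand-in for \textit{IndexImpl}. In the real suite this is a JUnit test against the actual class; here a HashMap-backed stub and plain assertions are used so the example is self-contained (run with \texttt{java -ea}):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained stand-in for IndexImpl: a HashMap-backed stub used only to
// illustrate the shape of our component tests (the real tests are JUnit
// tests against the actual class).
public class IndexImplTestSketch {
    static class IndexStub {
        private final Map<String, Integer> entries = new HashMap<>();

        void put(String tag, int count) { entries.put(tag, count); }
        Integer lookup(String tag)      { return entries.get(tag); }
        int size()                      { return entries.size(); }
    }

    public static void main(String[] args) {
        IndexStub index = new IndexStub();

        // Expected result: a stored entry is found again with the same count.
        index.put("java", 42);
        assert index.lookup("java") == 42 : "stored entry should be retrievable";

        // Expected result: an unknown tag yields no entry.
        assert index.lookup("missing") == null : "unknown tag should return null";

        // Expected result: re-tagging overwrites, it does not duplicate.
        index.put("java", 43);
        assert index.size() == 1 && index.lookup("java") == 43;
    }
}
```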

\textbf{TBD!!!} Mention the amount and kind of test data we are going to use.

\subsection{Component testing}

Component-based testing is the most fundamental testing that can be done
from the developer's or tester's point of view. In our project we have performed
component testing to a certain degree; below we publish results and code-coverage
statistics only for a few critical components. Further test cases can be found in
the appendix.

\subsubsection{IndexImpl test case}

\label{section:bbb}
Generated Tue Dec 1 10:51:40 CET 2009 from \textit{http://delicious.com/tag?sort=numsaves}.
\tablefirsthead{\hline \textbf{Expected Result} & \textbf{Actual Result} \\}
\tablehead{\hline \textbf{Expected Result} & \textbf{Actual Result} \\}
\tabletail{\hline}
\begin{xtabular}{ |p{0.45\linewidth}| |p{0.45\linewidth}| }
\hline
bleh & 33 \\ \hline
plehpleh & 44 \\ \hline
fff & 55 \\ \hline
\end{xtabular}
\label{section:aaa}

\subsubsection{IndexEntryImpl test case}
\subsubsection{TagFactory test case}

\subsection{Acceptance testing}

Acceptance testing covers the core functionality of the system. Special attention
has been paid to the distribution and retraction algorithms, as well as to the
corresponding message flow. Since this is an asynchronous system, it is difficult to
state assertions directly. The idea is to introduce a logging system that helps us trace
message flows and to draw conclusions from the obtained results. \textbf{TBD!!} Acceptance
testing does not cover every aspect of our application; instead, we test whether we have
met the previously defined success criteria:

\textbf{TBD!!!} These success criteria should be mentioned somewhere else in the report.
\begin{itemize}
\item Behavior correctness: the distribution and retraction algorithms work as described in the solution.
\item Data integrity: completeness of results; no data is lost.
\item Scalability: load is distributed evenly, or at least in a manner in which the system performs well.
\textbf{TBD!!!} \textit{Comment from Tomasz: it has never been defined what it means that the 'system performs well'. Should we introduce a section listing the functional and non-functional requirements?}
\item Performance: the system performs better than a similar system keeping all data on a single node.
\item Fairness (usability).
\item Self-maintenance: maintenance procedures ensure greater data availability and higher robustness.
\end{itemize}
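Since assertions over an asynchronous system are hard to state directly, one way to realise the logging idea above is to record message events in order and check the flow only after it has completed. The sketch below illustrates this technique; the event names and the \texttt{MessageFlowLog} class are hypothetical, chosen only to show the approach:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of log-based checking for an asynchronous message flow: nodes append
// events to a shared log, and assertions are stated over the completed log
// rather than over in-flight state. Event names are illustrative only.
public class MessageFlowLog {
    private final List<String> events = new ArrayList<>();

    synchronized void record(String event) { events.add(event); }

    // A distribution is considered correct if the DISTRIBUTE event for an
    // entry is eventually followed by a matching ACK for the same entry.
    synchronized boolean distributionAcknowledged(String entryId) {
        int sent  = events.indexOf("DISTRIBUTE:" + entryId);
        int acked = events.indexOf("ACK:" + entryId);
        return sent >= 0 && acked > sent;
    }

    public static void main(String[] args) {
        MessageFlowLog log = new MessageFlowLog();
        log.record("DISTRIBUTE:tag-7");
        log.record("DISTRIBUTE:tag-9");
        log.record("ACK:tag-7");

        System.out.println(log.distributionAcknowledged("tag-7")); // acknowledged
        System.out.println(log.distributionAcknowledged("tag-9")); // no ACK yet
    }
}
```

The same pattern extends to data-integrity checks (every stored entry appears in some node's log) and retraction flows, turning "the message flow looks right" into a checkable predicate over the trace.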

\subsubsection{Tagging objects}
\subsubsection{Untagging objects}
\subsubsection{Searching data}
\subsubsection{Leaf set replication}

\subsection{Performance testing}
\subsection{Stress testing}
\subsection{Load testing}
\subsection{Overall test results}