% qa.tex

\newcounter{c_test}%counts  the testcases
\setcounter{c_test}{0} 

\chapter{Quality Assurance}
\label{cha:qa}
The aim of this chapter is to demonstrate that the non-functional requirements within the scope are satisfied. The accomplished experiments, test cases, and scenarios are organized by component. 


\section{Data Processing Component}
\label{sec:qa-data-processing}




\paragraph{PCB}
A prototype node is equipped with a specially designed PCB containing the infrasonic microphone, a 4th-order Butterworth active low-pass filter, and an amplifier. The following test case investigates the quality of the circuit.
\vfill
\refstepcounter{c_test}\label{test:filter}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:filter}\\
	Description      & The gain of the implemented Butterworth filter is measured for various frequencies.\\
	Fit criterion for& REQ\,\#\ref{req:cutoff-freq}
\end{tabularx}
\vfill

\begin{figure}[tb]
	\centering
	\includegraphics[width = \linewidth]{diagrams/circuit-gain-001}
	\caption[Measured amplifier response]{Measured amplifier response; red - theoretical 4th-order Butterworth, blue - filter without amplifier, green - filter + amplifier}
	\label{fig:circuit-gain-001}
\end{figure}

For the breadboard construction the prototype PCB is used, i.e. manually soldered, without the microphone and the appropriate load resistor. A 3\,V source supplies the power. A function generator is connected between the coupling capacitor C1 and ground. It produces a sine wave with an amplitude of $U_{i-max}$ = 344\,mV. The PCB output pin and ground are connected to an oscilloscope, and the amplitude of the PCB output signal $U_{o-max}$ is measured. The output signal swings around 1.5\,V.



The gain $G_{PCB}$ of the circuit as a function of the input frequency $f_i$ is computed by the following equation (fig.~\ref{fig:circuit-gain-001}, green curve):

\begin{equation}
	G_{PCB}(f_i) = \log\left(\frac{U_{o-max}(f_i)-1.5\,V}{U_{i-max}}\right) \cdot 10\,dB
\end{equation}

To measure the gain of the AC amplifier, a frequency is injected which the filter passes with unity gain. In other words, the output voltage maximum as a function of the input frequency is searched; it lies at $f$ = 2.2\,Hz with $U_{o-max}$ = 2.844\,V. The gain at this point is: 

\begin{equation}
	G_{amp} = \frac{2.844\,V-1.5\,V}{U_{i-max}}=3.95
\end{equation}

The calculated value is $G$ = 4.25. To obtain the gain $G_{filter}$ attributable to the filter alone, the amplifier gain must be divided out (fig.~\ref{fig:circuit-gain-001}, blue curve):

\begin{equation}
	G_{filter}(f_i) = \log\left(\frac{U_{o-max}(f_i)-1.5\,V}{3.95\, U_{i-max}}\right) \cdot 10\,dB
\end{equation}

The theoretical amplitude response of a 4th-order Butterworth filter is plotted for comparison (fig.~\ref{fig:circuit-gain-001}, red curve) and accords with equation~\ref{eq:trans-butter-2nd}. Two obvious deviations are visible. The first one, below 1\,Hz, is explained by the high-pass effect of the coupling capacitor and possibly also by the high-pass effect of the capacitor of the AC amplifier. The second one, close to 100\,Hz, can be a measurement error caused by the fact that the output voltage in this range is nearly the DC offset of 1.5\,V.

The cutoff frequency is defined as the frequency for which the filter returns $\sqrt{1/2}$ of the pass-band voltage. The cutoff frequency $f_c$ can be experimentally determined by finding the frequency that fulfills the following equation:

\begin{equation}
	 \log\left(\sqrt{1/2}\right) \cdot 10\,dB = -1.51\,dB = G_{filter}(f_c)
\end{equation}

The realized cutoff frequency is $f_{c-low}$=19\,Hz. The cutoff frequency of the high-pass effect can be determined in the same way and is $f_{c-high}$=0.14\,Hz which is acceptable.

The results show moderate cutoff frequencies but a significant deviation of the AC amplifier gain, which could indicate imperfections in the manual PCB assembly. However, since no explicit gain factor is required, the error can be neglected.

\paragraph{The Data Acquisition}

Figure~\ref{fig:sample-001} shows the sampling of the infrasonic signal during the slamming of a door using the presented active low-pass filter. The sample is an extract of a record of more than 5000 values. Conspicuously, the median does not lie at half of the ADC12 range, i.e. 2048; the bipolar signal is not exactly mapped to the input range of the ADC12.
\vfill
\refstepcounter{c_test}\label{test:acqui}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:acqui}\\
	Description      & The ADC12 samples an infrasonic signal with a frequency of 200\,Hz.\\
	Fit criterion for& REQ\,\#\ref{req:adc}, REQ\,\#\ref{req:opening}, REQ\,\#\ref{req:cutoff-freq}
\end{tabularx}


\begin{figure}[tbp]
	\centering
	\includegraphics[width = \linewidth]{diagrams/sample-001}
	\caption[Infrasonic sample of a slamming door]{Infrasonic sample of a slamming door; sampled by 200\,Hz}
	\label{fig:sample-001}
\end{figure}

Ongoing work needs to analyze the quality of the data acquisition in detail, as well as the distributed event detection. Both are hard to analyze without an infrasonic reference signal and with only one infrasonic sensor node.

\section{Time Component}
\label{sec:qa-time}

\paragraph{Real Time Clock Accuracy}
The three-stage GPS synchronization allows the GPS device to be switched off for long periods ($t_{GPS\_OFF}$). The time period impacts not only the energy consumption but also the RTC accuracy. To find the best trade-off between energy consumption and accuracy, test series with different $t_{GPS\_OFF}$ values are accomplished. Two types of tests are carried out. The first type measures the absolute deviation with respect to UTC time. The second one investigates the relative deviation between two nodes.
\vfill
\refstepcounter{c_test}\label{test:deviation-1}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:deviation-1}\\
	Description      & The absolute deviation of the RTC to the UTC time is measured.\\
	Fit criterion for& REQ\,\#\ref{req:time-accuracy}
\end{tabularx}
\vfill
The breadboard simply consists of a single sensor node equipped with a GPS device. It is configured to start the stage-one synchronization immediately, stage two after ten minutes, and stage three after a further ten minutes. From then on, the time period $t_{GPS\_OFF}$ is used. Since the GPS device needs between 45\,s and about 165\,s to provide a valid PPS signal, the synchronization period is $t_{GPS\_OFF}$ plus the fluctuating GPS startup time.

\begin{figure}[tbp]
	\centering
	\includegraphics[width = \linewidth]{diagrams/deviation_aclk_004}
	\caption[15 hour test result of the RTC deviation]{15 hour test result of the RTC deviation; three staged GPS synchronization happened about every 45\,minutes}
	\label{fig:qa-aclk-dev}
\end{figure}

Figure~\ref{fig:qa-aclk-dev} shows the result for $t_{GPS\_OFF} = 45$\,min; the test ran more than 15 hours. Several conclusions can be drawn by interpreting the responses. The first observation is that within the first three hours the synchronization is unacceptable. The reason could be the temperature difference between the office and outside; the node needs some time to acclimatize to the outside temperature. Furthermore, the outside temperature declined in the evening hours. 

\begin{figure}[tbp]
	\centering
	\includegraphics[width = \linewidth]{diagrams/deviation_aclk_005}
	\caption[Comparison of the predicted frequency response]{Comparison of the predicted frequency response (green) by Time component and theoretical tangential response (red)}
	\label{fig:qa-aclk-grad}
\end{figure}

The elimination of the software bug (see page~\pageref{werdasliestistdoof}) caused by a roundoff error is empirically verified and shown in figure~\ref{fig:qa-aclk-grad}. The theoretical tangential and the predicted response are congruent.

Incontestable is the dependence of the RTC deviation on the deviation between the expected and the real frequency response: a bend in the frequency response leads to a deviation. However, taking the second derivative into account would worsen the prediction of the frequency. One can see this by imagining the expected frequency response extended by the turns of the real curve. The mathematically unpredictable bends might be physically predictable if, for instance, the temperature is monitored. A future investigation of the dependence between frequency and temperature could pay off.

%However, the maximum deviation for the following response lays by 0.49\,ms. This value is close to the allowed error caused by an unexpected bend of $f_{ACLK}$(between UTC:19:10 and 19:40). 

In summary, the RTC exceeds the allowed limit 33.7\% of the time (22.1\% if the first three hours are ignored).

For the experimental assurance of the desired accuracy, the time $t_{GPS\_OFF}$ needs to be defined according to the highest gradient, which yields 648\,s (eq.~\ref{keinbockmehr}), ignoring the first three hours.

\begin{equation}
\label{keinbockmehr}
t_{GPS\_OFF} = \frac{0.5\,ms}{\vert gradient_{max}\vert} = 648\,s
\end{equation}

The average gradient (eq.~\ref{eq:qa-time-avg-grad}) of the deviation takes both into account: long periods of small deviations and the fraction of peaks of high deviations. Fixing the time $t_{GPS\_OFF}$ according to the average gradient (eq.~\ref{eq:qa-time-gps-off}) allows the allowed deviation to be exceeded for a short time. For the average gradient without the first hour the result is $t_{GPS\_OFF} = 2150\,s$.

A very optimistic but energy-conserving approach is to take only the smallest 75\% of the gradients into account. The resulting value according to this average gradient is $t_{GPS\_OFF} = 4168\,s$.

\begin{equation}
\label{eq:qa-time-avg-grad}
	\overline{gradient} = \frac{\sum_{min<i\leq max} \frac{\vert deviation_i\vert }{\Delta t_i} }{max - min}
\end{equation} 

\begin{equation}
\label{eq:qa-time-gps-off}
	t_{GPS\_OFF} = \frac{0.5\,ms}{\overline{gradient}} 
\end{equation} 

An adjustment every 30\,minutes (fig.~\ref{fig:qa-aclk-dev2}) doesn't show a considerable improvement. The node needs two hours to acclimatize and to calculate the gradient for the realistic temperature response. After the first two hours, the limit deviation of 0.5\,ms is exceeded about 19.9\% of the time. A shorter time period for the GPS adjustment doesn't sufficiently solve the deviation problem caused by the turns in the frequency response.

For a complete evaluation of the measurement, one outstanding fact needs to be known. The pink circles mark long time periods of a deficient GPS signal, each lasting about an hour. Both GPS timeouts were caused by rain. Rain impedes the synchronization process twice: on the one hand it induces GPS timeouts, and on the other hand it comes together with an increase in temperature, which itself speeds up the ACLK. Nevertheless, excluding these values would still result in a violation of the accuracy requirements for about 12\% of the time.

\begin{figure}[tb]
	\centering
	\includegraphics[width = \linewidth]{diagrams/deviation_aclk_006}
	\caption[18 hour test result of the RTC deviation]{18 hour test result of the RTC deviation; three staged GPS synchronization happened about every 30\,minutes; pink circles - GPS timeout caused by rain}
	\label{fig:qa-aclk-dev2}
\end{figure}

\paragraph{Software Timer}
The white box (or glass box) software test is an empirical test series to assure the mandatory behavior with knowledge of the internal functionality. An ideal white box test passes each source code line at least once. The automatic tests are implemented by a test application. This application is developed and executed concurrently with the component, so the component's quality level increases monotonically.
\vfill
\refstepcounter{c_test}\label{test:timer}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:timer}\\
	Description      & White box software-test assures stability and mandatory behavior of the software timer.\\
	Fit criterion for& GOAL\,\#\ref{goal:reliable}\\
\end{tabularx}
\vfill
The test application \texttt{timer\_test} covers the following functionalities:

\begin{compactitem}
	\item	\texttt{addTimer()}
	\begin{compactitem}
		\item Timers are added in a random order.
		\item The execution order and time is watched.
		\item The shortest timer already scheduled by the hardware timer is displaced by a shorter one, i.e. it is moved from the first to the second position in the software timer list.
		\item Timers with an identical execution time are added.
		\item More timers than supported are scheduled, i.e. an error is produced.
	\end{compactitem}
	\item Arithmetics for time structures.
	\begin{compactitem}
		\item Two times are added with and without passing midnight.
		\item One time value is increased with and without passing midnight.
		\item The handling of different units (seconds or ACLK ticks) for equivalent values is compared.  
	\end{compactitem}
	\item \texttt{removeTimer()}
		\begin{compactitem}
		\item The last, the first, and a median timer are removed.
		\item An non existing timer is removed, i.e. an error is produced.
		\item A timer already scheduled by the hardware timer is removed.
		\item The execution order and time of the remaining timers is watched.
	\end{compactitem}
\end{compactitem}



\section{Network Component}
\label{sec:qa-network}
The tests presented in this section range from functionality tests, done as white box tests, to realistic test scenarios, especially for the network layer. 

\paragraph{The Data Link Layer}
Again, the goal is to execute every line of the source code of the class \texttt{DataLinkController}. Therefore, two applications are required: the \emph{test\_dll\_server} and the \emph{test\_dll\_client} application. 
\vfill
\refstepcounter{c_test}\label{test:dll}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:dll}\\
	Description      & White box software-test assures stability and mandatory behavior of the data link layer.\\
	Fit criterion for& GOAL\,\#\ref{goal:reliable}\\
\end{tabularx}
\vfill
The server application listens and reacts to client messages and packets. The time slots are coordinated by the client, i.e. a simple command brings the server into a specific mode. The VDM component is used with four blocks, i.e. 16 data packets. Furthermore, different time measurements of the radio activity are done. All transmitted and received messages, and even entire data blocks, are written to the serial port, so manual tracing is possible. Time measurements for transmissions in \emph{burst} mode, i.e. transmitting all blocks contained in the memory in one go, span from the first transmitted byte of the first block to the last byte of the last block. Thereby the time consumed by the CPU is taken into account. The simulation covers the following issues:  

\begin{compactitem}
	\item	Transmission of messages in all possible modes.
	\item Transmission of different sized blocks on client side - a total of ten packets:
		\begin{compactitem}
		\item one block with one packet,
		\item one block with two packets,
		\item one block with three packets,
		\item one completely filled block.
	\end{compactitem}
	\item On server side the ten packets are received into the initial block configuration:
		\begin{compactitem}
		\item one empty block,
		\item one block with one packet,
		\item one block with two packets,
		\item one block with three packets.
	\end{compactitem}
	\item The server transmits all 16 packets.
	\item The client receives the 16 packets but drops the already known packets.
	\item Time measurements are done for:
		\begin{compactitem}
		\item transmission of each dialog message or packet,
		\item reception of each dialog message or packet,
		\item transmission of frames in burst mode,
		\item reception of frames in burst mode.
	\end{compactitem}
\end{compactitem}

\paragraph{Results}
By executing the tests, different software faults were detected. One example is an occasional wrong memory addressing within a frame structure. It is explained by the following circumstance: a frame structure contains 8\,bit and 16\,bit fields. To avoid 8\,bit padding after an 8\,bit field, and thus the waste of bytes, the compiler attribute \texttt{packed} is used for those structures. The addressing fault happened whenever a 16\,bit field thereby got an odd address. A rearrangement of the frame structure doesn't suffice, since an odd block size (512\,bytes plus management overhead) could lead to odd 16\,bit field addresses, too. This error was hard to find, because the block management overhead, and with it the block size, varies depending on whether an SD card is present.

A further problem was the constant loss of the last two bytes. To solve it I used a special two byte \emph{epilogue} field as recommended by the ScatterWeb library. 

\begin{table}[tbp]
	\centering
	\begin{tabularx}{0.85\linewidth}{lXc}
	mode & operation & data rate [kbit/s]\\
		\toprule
\rowcolor{light_blue}
	DIALOG & send dialog message& 10.6\\
	DIALOG & send dialog messages in burst mode & 8.87\\
\rowcolor{light_blue}
	DIALOG & receive dialog message& 10.5\\
	TXDATA & send data packet & 9.75\\
\rowcolor{light_blue}
	TXDATA & send new data packets in burst mode & 9.29\\
	TXDATA & forward data packets in burst mode & 9.42\\
\rowcolor{light_blue}
	RXDATA & receive data packet & 9.68\\
	RXDATA & receive new data packets in burst mode & 8.73\\
\bottomrule
	\end{tabularx}
	\caption{Average data rates of the data link layer}
	\label{tab:radio-time}
\end{table}

A software initiated capture of the ACLK is used to measure the transmission and reception durations. Two different types of times are measured to determine the data rates.

The first type measures the data rate provided by the radio controller. In order to measure the transmission time, the occupation time of the DMA controller is measured, whereby the DMA controller is clocked by the radio controller. The reception time is measured between the first received byte after the preamble and the last byte of the frame (without the two epilogue bytes).

The second type is of higher interest. It is the average time of transmitting a couple of frames in one \emph{burst}. In other words, the data rate for the burst mode comprises both the radio data rate and the delay caused by the software and the radio strokes. It is exactly the time between switching into a new mode and the transmission or reception of the last byte of the last scheduled frame. The transmission of a new data packet takes more time than retransmitting or forwarding a packet, because the CRC checksum needs to be computed first and the preamble and epilogue must be added to the frame structure. 

The resulting data rates of the data link layer, according to the measurements of the transmission and reception durations, are listed in table~\ref{tab:radio-time}.

\paragraph{The Network Layer}
The issues for the network layer are the capacity, the reliability, and the robustness. Two types of test scenarios were accomplished: the transmission of a continuous data stream and a sparse transmission of data records over a long time period. 

For both tests the command \texttt{cost} was implemented on the gateway node in order to simulate a cost distribution. The first parameter of the command addresses a node. The second one specifies the desired cost of the node. On receiving a dialog message containing the command, the destination node adjusts its cost accordingly. While the test application is running, the node ignores messages transmitted by nodes whose cost difference is greater than one.
 
Unfortunately, a second node equipped with a second GPS device was not available in time. Due to the fact that the network time slots are seeded by time-synchronized nodes, the tests could only be done with a single data source. However, the intermediate nodes don't care about the packet originator, and multiple data sources are not essentially required to test the functionality. 
\vfill
\refstepcounter{c_test}\label{test:net1}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:net1}\\
	Description      & A continuous data stream of records is routed through a multi-hop network. The data volume is as big as possible.\\
	Fit criterion for& REQ\,\#\ref{req:capacity}, GOAL\,\#\ref{goal:reliable}, GOAL\,\#\ref{goal:fault-tolerance}\\
\end{tabularx}
\vfill
Four different configurations of the breadboard were realized. Different network depths and different numbers of nodes on the same level were tested. Of course, the data sink was always on level zero. The data rates are measured by the GatewayClient.

\begin{table}[tbp]
	\centering
	\begin{tabularx}{\linewidth}{XXlll}
	data rate [kbit/s] & volume [kB] & cost level 1 & cost level 2 & cost level 3\\
		\toprule
\rowcolor{light_blue}
2.35 & 530 & one data source & & \\
2.26	& 502 & one intermediate & one data source& \\
\rowcolor{light_blue}
1.37	& 292 & two intermediates& one data source& \\
2.09	& 524 & one intermediate&one intermediate & one data source\\
\bottomrule
	\end{tabularx}
	\caption{Test results of routing a continuous data stream.}
	\label{tab:net1}
\end{table}

Table~\ref{tab:net1} shows the data rates measured for the different network topologies. The packet loss rate (not listed in the table) for all topologies was 0\%. The rate for the two-node topology (line one in the table) matches exactly the theoretical value of equation~\ref{eq:data_rate}. Conspicuous is the measured value for two intermediates on the same cost level. Obviously, the small value is caused by collisions. This indicates a too small value for the used backoff delay. Anyhow, the results still fulfill the requirements. 
\vfill
\refstepcounter{c_test}\label{test:net2}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:net2}\\
	Description      & Single data records are routed through a multi-hop network with respect to periods of no network traffic.\\
	Fit criterion for& GOAL\,\#\ref{goal:reliable}, GOAL\,\#\ref{goal:fault-tolerance}\\
\end{tabularx}
\vfill
The following test case measures the reliability of the network synchronization. During periods of no network traffic, no synchronization of the time slots happens. The network topology was linear with a depth of three, i.e. the data source was on cost level three. The size of the records was chosen randomly.

\begin{table}[tbp]
	\centering
	\begin{tabular*}{0.84\linewidth}{llll}
	test duration [min] &  record separation [min] & volume [kB] & records\\
		\toprule
\rowcolor{light_blue}
124 & 15& 51 & 8 \\
185	& 30& 33 & 7 \\
\bottomrule
	\end{tabular*}
	\caption{Test results of routing periodically single records.}
	\label{tab:net2}
\end{table}

A delay of 15 minutes between the record transmissions results in a stable time slot behavior of the network (tab.~\ref{tab:net2}, line one). However, for the delay of 30 minutes the network got asynchronous after the 7th record. Another test with a delay of one hour (not listed) failed after the first record. This implies that the time slot deviation after 30 minutes can exceed half of the ADV slot duration, i.e. 94\,ms or 3072 ACLK\,ticks. In other words, if the crystal oscillators of the intermediates deviate by about 3.4 ticks per second, the ADV slot duration is exceeded after 30 minutes (cf.~sec.~\ref{sec:qa-time}).
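The rate of 3.4 ticks per second is consistent with the slot geometry (a back-of-the-envelope check, taking the full ADV slot as $2 \cdot 3072 = 6144$ ACLK ticks):

\begin{equation}
	\frac{6144\,ticks}{30 \cdot 60\,s} \approx 3.4\,\frac{ticks}{s}
\end{equation}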

In order to increase the allowed deviation, the ADV slots could simply be extended. Also, periodic transmissions of ADV messages would synchronize the nodes again. However, both approaches would increase the network energy consumption.

Another solution is that the sensor nodes flood the measured frequency and its expected trend immediately after a sensor node synchronized to the GPS device. For this, it is assumed that the oscillators of all nodes work under the same conditions.

Considering very long time periods without any network traffic, the following strategy is promising: if the nodes become asynchronous, it firstly matters for transmission purposes. To synchronize again, the acting node transmits its slot time multiple times with a delay which assures that the message is received in at least one ADV slot of a child node. A delay of about half of the ADV slot duration would suit. The acting node thereby \emph{scans} the time slots.

Notwithstanding the unsatisfying result, the implemented D3 routing protocol works in principle. A detailed analysis, and thereby the adjustment of the protocol parameters, is a direction for future work, as well as large scale tests. 


\paragraph{The Transport Layer}
The infrasonic records are transported in segments which are reassembled by the GatewayClient. The functionality and reliability are tested by the following test case. 

\refstepcounter{c_test}\label{test:trans}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:trans}\\
	Description      & The fragmentation and reassembling of a continuous data stream routed through a multi-hop network is verified.\\
	Fit criterion for& GOAL\,\#\ref{goal:reliable}, GOAL\,\#\ref{goal:fault-tolerance}\\
\end{tabularx}

The test case was accomplished together with test case~\ref{test:net1}. The data source writes the following information into the randomly generated records: the beginning of a segment, the end of a segment, and the segment number, starting at zero for each record. The files of the records created by the GatewayClient are verified according to this information. Especially the written segment order is inspected. 

In this way the correctness of the record reassembling is assured. In very rare cases, segments were missing. The analysis of the gateway node segment confirmations showed that all missing segments were correctly received by the gateway node. The reason for the loss of the segments is occasional incorrect communication over the serial port. It is unclear on which side the problem is caused. Nevertheless, it is recommended to extend the confirmation procedure to the GatewayClient. If entire packets are forwarded to the GatewayClient, the CRC can be used to detect incorrectly forwarded segments.


\section{Virtual Data Component}
\label{sec:qa-vdm}
\vfill
\refstepcounter{c_test}\label{test:vdm}
\begin{tabularx}{\linewidth}{l X}
	Test case        & \#\,\ref{test:vdm}\\
	Description      & White box software-test assures stability and mandatory behavior of the VDM component.\\
	Fit criterion for& GOAL\,\#\ref{goal:reliable}\\
\end{tabularx}
\vfill
The white box tests are implemented by the \texttt{vdm\_test} application. Furthermore, after each VDM operation the VDM invariant is tested, which assures that each block is either allocated by the application or in one of the used stacks or the queue. The successfully accomplished tests cover the following issues:

\begin{compactitem}
	\item \texttt{allocate()} under different conditions
	\begin{compactitem}
		\item sufficient/deficient RAM
		\item sufficient/deficient SD free space
	\end{compactitem}
	\item \texttt{pagein()}, \texttt{pageout()} 
	\begin{compactitem}
		\item by SD card I/O
		\item by moving read head paging optimization 
		\item by moving write head paging optimization 
	\end{compactitem}
	\item \texttt{requestBlock()} 
	\begin{compactitem}
		\item sufficient/deficient RAM
		\item of different block sizes
	\end{compactitem}
	\item iterator functions \texttt{hasNext()}, \texttt{next()}, and \texttt{moveFirst()}
	\begin{compactitem}
		\item expected order
		\item different block size conditions
	\end{compactitem}
	\item history functions \texttt{confirm()} and \texttt{segmentState()} for
	\begin{compactitem}
		\item locked entries, i.e. history entries for segments which are currently paged in
		\item unknown entries
		\item paged entries
		\item paged and overwritten entries
		\item buffered entries
		\item confirmed entries
	\end{compactitem}
	\item entire SD card ring size I/O
	\begin{compactitem}
		\item writing exactly from the ring start to the ring end
		\item writing from ring \emph{middle} to ring \emph{middle}
		\item verifying paged in segment order of an entire SD card ring stream
	\end{compactitem}
\end{compactitem}

Since the tests are implemented as an application, they can easily be accomplished at any time, for instance after a source code modification of the VDM component.

\section{Requirements Traceability Matrix}
\label{sec:qa-traceability}
Table~\ref{tab:traceability2} maps the test cases to the non-functional requirements in view of the results. Requirements without test cases cannot be directly mapped.

\noindent
\begin{table}[tbp]
\centering
\begin{tabular*}{0.6\linewidth}{r|ccccccc}
	& \rotatebox{90}{TEST\,\#\ref{test:filter}} 
	& \rotatebox{90}{TEST\,\#\ref{test:acqui}} 
	& \rotatebox{90}{TEST\,\#\ref{test:deviation-1}} 
	& \rotatebox{90}{TEST\,\#\ref{test:dll}} 
	& \rotatebox{90}{TEST\,\#\ref{test:net1}} 
	& \rotatebox{90}{TEST\,\#\ref{test:net2}} 
	& \rotatebox{90}{TEST\,\#\ref{test:trans}}  \\
\midrule
\rowcolor{light_green}
	 REQ\,\#\ref{req:cutoff-freq} 						&\Checkmark &\Checkmark & & & & &  \\
\rowcolor{light_green}
	 REQ\,\#\ref{req:time-accuracy} 						& & &\Checkmark & & & &  \\
	 REQ\,\#\ref{req:spatial-accuracy}					& & & & & & &  \\
\rowcolor{light_green}
	 REQ\,\#\ref{req:adc}			 						& &\Checkmark & & & & &  \\
	 REQ\,\#\ref{req:sensitivity} 						& & & & & & &  \\
\rowcolor{light_green}
	 REQ\,\#\ref{req:opening}		 						& &\Checkmark & & & & &  \\
\rowcolor{light_orange}	 
	REQ\,\#\ref{req:availability} 						& & & & & &\XSolidBrush &  \\
\rowcolor{light_green}	 
	REQ\,\#\ref{req:reliability} 						& & & & &\Checkmark & &(\Checkmark)\\
	 REQ\,\#\ref{req:robustness}	 						& & & & & & &  \\
\rowcolor{light_green}
	 REQ\,\#\ref{req:capacity}		 						& & & &\Checkmark &\Checkmark & &  \\
\rowcolor{white}
	 REQ\,\#\ref{req:weatherproof} 					& & & & & & &  \\
	 REQ\,\#\ref{req:installation} 						& & & & & & &  \\
	 REQ\,\#\ref{req:administration}						& & & & & & &  \\
\rowcolor{white}
	 REQ\,\#\ref{req:tampering}	 						& & & & & & &  \\
\bottomrule
\end{tabular*}
\caption[Requirements Traceability Matrix]{Requirements Traceability Matrix; green  - succeeded; orange - failed; white - exceeding scope of the project}
\label{tab:traceability2}
\end{table}

The availability requirement REQ\,\#\ref{req:availability} is violated by the result of the network long-time test case TEST\,\#\ref{test:net2}. Promising suggestions to solve the issue were discussed. However, for an adequate availability test, a long-time test under realistic circumstances needs to be accomplished. In view of the extensiveness of the project, this test could not be done.

Relying upon the GPS devices, the spatial position accuracy REQ\,\#\ref{req:spatial-accuracy} is not tested. To assure the event detection sensitivity requirement REQ\,\#\ref{req:sensitivity}, more infrasonic sensor nodes are required. 

The assurance of the robustness requirement REQ\,\#\ref{req:robustness} concerns the Network and EnergyManagement components. Owing to the extensiveness of the project, a large scale test with simulated node failures could not be accomplished. All other white rows within the traceability matrix exceed the scope of the thesis.
