\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}

\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Evaluation}

This chapter describes in more detail the tests I undertook during
implementation of the project, and outlines how I have altered the code to
improve performance based on these results. I then carry out the same tests
again using the new implementation to measure the performance differences.

\section{Initial implementation tests}

During the implementation various aspects of the system needed to be evaluated
before I could move on. These mostly focussed on finding values for the
variables which worked, such as the sample rate and the amount of time each tone
should play for in order to be reliably decoded.

The testing also revealed an interesting bug when experimenting with different
sound lengths, sample rates and spacing factors. I began tests using a text file
containing the sentence: \emph{The quick brown fox jumps over the lazy dog}.
This contains approximately one sixth of the possible byte values, but almost
half of those most frequently seen in QR-code transfers, which commonly encode
web addresses for advertising. I found that, especially when using
shorter lengths of time to encode the data, the decoded data was frequently off
by 1, for example ``d'' being decoded as ``e''. I initially attributed this to
an insufficient spacing factor, but then saw in my first 10 tests of 16ms
length, 32000Hz sample rate, 30Hz spacing factor, the same characters were being
frequently replaced. These were the strings decoded in those 10 tests:
\begin{itemize}
  \item Vgg sskbk bsowo gow ksoos owgs sgg kb\{w bog
  \item Vgg sskbk bsowo gow ksoos owgs sgg kb\{w bog
  \item Vgf sskbk bsowo gow ksoos owgs sgg kb\{w bog
  \item Vgg sskbk bsowo gow ksoos owgs sgg kb\{w bog
  \item Vgg sskbk bsown gow ksoos owgs sgg kb\{w bog
  \item Vff sskbk bsowo gow ksoos owgs sgg kb\{w bof
  \item Vgg sskck bsown gow ksoos owgs sgg kb\{w bog
  \item Vgg sskbk bsown gow ksoos owgs sgg kb\{w bog
  \item Vgf sskbk bsowo gow ksoos owgs sgg lb\{w bog
  \item Vgg sskbk bsowo gow ksoos owgs sgg kb\{w bof
\end{itemize}

I traced the problem to the output of the Apache FFT, which in this example was
returning the same array index for ``i'', ``j'' and ``k'', to name one of the
overlaps. This is surprising as I would not expect the same array index to be
returned every time for even the same letter, as the array index represents
which frequency bin has the highest amplitude, and the large sampling rate means
several array elements represent one frequency, so any of them could feasibly be
returned depending on the characteristics of that particular instance of the
sound. I would therefore consider this a peculiar bug in the Apache FFT;
perhaps some internal rounding is occurring and skewing the data.
To counter the bug, I altered my implementation so the result returned is not
based on one array index, but rather the 10 array indexes surrounding the
largest amplitude. This is because when comparing characters ``i'' and ``k'',
ASCII 105 and 107 respectively, for ``i'' the lower five array elements
around the high-point contain larger values than the upper five, and vice versa
for ``k''. Using a spread of data from the FFT gives a more reliable
representation of the actual frequency. All tests in this evaluation use this
altered implementation.
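The bin-spread idea can be sketched in Python (the project itself is written in Java against the Apache FFT; `dft_magnitudes` below is a naive stand-in for that transform, and the function names are illustrative):

```python
import cmath

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum, standing in for the Apache FFT output."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def refined_peak_bin(mags, spread=10):
    """Estimate the peak using the 10 bins surrounding the largest amplitude.

    Instead of trusting the single argmax index, take an amplitude-weighted
    average over the neighbouring bins, so the balance of energy above and
    below the high-point (the ``i'' versus ``k'' case) is taken into account.
    """
    peak = max(range(len(mags)), key=lambda k: mags[k])
    lo = max(0, peak - spread // 2)
    hi = min(len(mags), peak + spread // 2 + 1)
    total = sum(mags[k] for k in range(lo, hi))
    return sum(k * mags[k] for k in range(lo, hi)) / total
```

For a tone that falls between two bins, the weighted estimate lands between them rather than snapping to whichever single index the transform happens to favour.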

\subsection{Frequency range}
A complication arises from the prevalence of low frequencies in everyday life.
Figure~\ref{fig:lowfreq} shows the microphone input from listening to a tone of
700Hz.

\begin{figure}[h]
\includegraphics[width=\textwidth]{lowfreq.png}
\caption{Background noise means low frequencies are almost always detected
alongside the desired frequency. To counter this frequencies below 300Hz should
be ignored.}
\label{fig:lowfreq}
\end{figure}

Background noise such as computer fans, wind and distant conversations cause a
rise in the amount of lower frequencies detected. As Figure~\ref{fig:lowfreq}
shows, the majority of these frequencies are below 300Hz. Therefore, to
partly counter background noise, no byte value should be mapped to a frequency
below 300Hz. This can be achieved by shifting every frequency up by that amount.
In the implementation this means the portion of the output array from the
Fourier Transform that concerns frequencies below that level can be deleted
entirely, so when scanning the array for the largest amplitude detected, the
lower frequencies are not considered. Figure~\ref{fig:bkgnoise} shows the new
output from the FFT, which is much less ambiguous.

\begin{figure}[h]
\includegraphics[width=\textwidth]{bkgnoise.png}
\caption{Output of FFT from listening to a 700Hz tone. Ignoring the bottom 300Hz
means the frequency detected is much clearer.}
\label{fig:bkgnoise}
\end{figure}
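In code, ignoring the bottom 300Hz amounts to skipping the leading portion of the FFT output before scanning for the peak. A minimal Python sketch, assuming the usual bin-to-frequency relation $f = k \cdot \mathit{rate} / n$ (names are illustrative):

```python
import math

def strongest_frequency(mags, sample_rate, n_samples, cutoff_hz=300):
    """Return the dominant frequency, ignoring bins below the cutoff.

    Bin k of an n-point transform represents k * sample_rate / n_samples Hz,
    so every bin below 300Hz is simply excluded from the search.
    """
    bin_width = sample_rate / n_samples
    first_bin = math.ceil(cutoff_hz / bin_width)  # skip everything below 300Hz
    peak = max(range(first_bin, len(mags)), key=lambda k: mags[k])
    return peak * bin_width
```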

\subsection{Enlarging factor}

As frequencies very close together are almost indistinguishable, it is not
possible to simply map each byte value from 0 to 255 to the frequency of the same number
(offset by 300). A multiplying factor is required to space out the frequencies
and allow for some errors in transmission. Also, a Fourier Transform does not
return the exact frequency which was transmitted as it returns \emph{frequency
bins} which can contain several frequencies depending on the bin size. As well
as this, background noise and differing amplitudes may alter the perceived
frequency so it is unrealistic to expect the transform to work with perfect
accuracy in all situations. Figure~\ref{fig:fault_tolerance} shows how the
frequency spacing technique also allows for some simple error correction.

\begin{figure}[h]
\includegraphics[width=\textwidth]{errorcorrection.png}
\caption{The frequency detected may not be exactly the one transmitted. By
spacing the frequencies out, the receiving end can make a best estimate of what
the original frequency was meant to be, as the received frequency is likely to
be closest to what it was meant to be. In this example, a detected frequency of
680 is closer to 690 than 660 so will be corrected to 690.}
\label{fig:fault_tolerance}
\end{figure}
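The mapping and its error-correcting inverse are then only a few lines. This sketch assumes the 300Hz offset and the 30Hz spacing settled on in this chapter (the constant names are my own):

```python
BASE_HZ = 300   # offset keeping every tone above common background noise
SPACING = 30    # gap between the frequencies of adjacent byte values

def byte_to_freq(b):
    """Encode: byte value 0-255 to its tone frequency in Hz."""
    return BASE_HZ + b * SPACING

def freq_to_byte(freq_hz):
    """Decode: snap a detected frequency to the nearest encoded value."""
    return round((freq_hz - BASE_HZ) / SPACING)
```

As in the example above, a detected frequency of 680Hz snaps back to 690Hz.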

A further consideration when testing frequency gap sizes is the sample rate. The
larger the frequency spacing, the higher the largest frequency will be, which
due to the \emph{Nyquist-Shannon Sampling Theorem} means the sample rate may
need to be increased. This means more samples are recorded per second which
increases the total memory requirement for Dolphin and, as I discussed in
Chapter 3, more memory means the garbage collector is more likely to run which
will slow down processing.

\begin{table}[h]
\centering
\begin{tabular}{c c c c c c c c c c c}
\hline\hline
H & e & l & l & o &  & W & o & r & l & d\\[0.5ex]
\hline
2460 & 3330 & 3540 & 3540 & 3630 & 1260 & 2910 & 3630 & 3720 & 3540 & 3300 \\
2452 & 3332 & 3548 & 3544 & 3624 & 1260 & 2906 & 3636 & 3718 & 3534 & 2998 \\[1ex]
\hline
\end{tabular}
\caption{Decoded frequencies representing ``Hello World''. The first row is the
actual value encoded, the second is what was received by the second device,
rounded to the nearest even integer.}
\label{table:helloworld}
\end{table}

Table~\ref{table:helloworld} shows the numerical output from a Fourier
Transform based on the actual frequencies transmitted. The largest difference is
8 so a rounding to the nearest 30 is a good starting point, allowing for more
inaccuracy with increased background noise or differing quality of device
microphone. Figure~\ref{fig:factor} shows the average difference between the
actual frequency and the decoded frequency for frequency spacings 30 to 60,
using a text file containing the sentence ``The quick brown fox jumps over the
lazy dog!'' as the encoded data. For all spacings above 30, a sample rate of
32000Hz was used as the highest possible frequency was higher than 8000Hz.

\begin{figure}[h]
\includegraphics[width=\textwidth]{spacing.png}
\caption{The average, absolute difference in actual frequency and decoded
frequency when using frequency spacings 30 to 60.}
\label{fig:factor}
\end{figure}

There is a slight increase in the distance from the intended frequency as the
spacing increases. This is likely because higher frequencies are harder
to distinguish from each other, so as the spacing increases and higher
frequencies are more regularly used, the accuracy decreases. However, all these
average results are within the tolerance of half the spacing value, so would
round to the correct value regardless. Therefore, as lower frequencies are more
comfortable to listen to~\cite{HighPitch}, I use 30Hz spacing.

\subsection{Sample rate}
The previous results indicated a spacing of 30Hz per byte would be sufficient.
This means in order to encode every possible bit pattern of a byte I must use
a range of $255 \times 30$Hz, which is 7650Hz. Given that I remove the 0\textendash300Hz
range to counter background noise, the range using the lowest frequencies would
be 300\textendash7950Hz. The \emph{Nyquist-Shannon Sampling Theorem} states that
the sample rate must be at least twice the maximum possible frequency to avoid
aliasing in the decoding. Therefore my sample rate needs to be at least 15900Hz
and there is no point testing sample rates lower than this. To use the
\emph{Apache} Fast Fourier Transform the sample rate must also be a power of 2
multiplied by 1000, as there is a conversion from milliseconds to seconds
between the recording and sending it to the transform which needs to be
corrected, so the only choices between 0 and 44100Hz (the standard sample rate
used for audio encoding) are 16000Hz and 32000Hz.
Figure~\ref{fig:samplerates} shows the results of my tests for
both, with variable encoding lengths.
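The arithmetic that narrows the choice to these two rates can be checked directly (a sketch; the power-of-two-times-1000 constraint and the 44100Hz ceiling are as stated above):

```python
BASE_HZ, SPACING = 300, 30
f_max = BASE_HZ + 255 * SPACING      # highest encoded tone: 7950Hz
nyquist_min = 2 * f_max              # minimum sample rate: 15900Hz

# Candidate rates are powers of 2 multiplied by 1000, capped at 44100Hz
candidates = [2 ** k * 1000 for k in range(1, 6)]
usable = [r for r in candidates if nyquist_min <= r <= 44100]
```

which leaves exactly 16000Hz and 32000Hz.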

\begin{figure}[h]
\includegraphics[width=\textwidth]{samplerate.png}
\caption{The percentage of bytes decoded incorrectly under different sample
rates, averaged over 10 tests for each tone length. The blue line, representing
16kHz is clearly inferior to the red 32kHz sample rate in all cases, so the
extra memory requirement for the sake of accuracy is worth it.}
\label{fig:samplerates}
\end{figure}

32000Hz is clearly far superior in every case, so the additional memory
requirement will have to be tolerated. Seeing as the main argument for using a
frequency spacing factor of 30Hz was the reduced memory, it is worth examining
if a 60Hz spacing is more accurate than 30Hz as I am using the extra memory
anyway. Figure~\ref{fig:spacingat32khz} shows this comparison.

\begin{figure}[h]
\includegraphics[width=\textwidth]{samplerate2.png}
\caption{The percentage of bytes decoded incorrectly at a 32kHz sample rate
using different frequency spacings, averaged over 10 tests for each tone length.
The red line is as before, using a 30Hz spacing. The green line shows the
results from a 60Hz spacing, which is worse than the 30Hz spacing at a 16kHz
sample rate.}
\label{fig:spacingat32khz}
\end{figure}

The green line representing 60Hz spacing in Figure~\ref{fig:spacingat32khz} is
worse than even the 30Hz spacing at a 16000Hz sample rate from
Figure~\ref{fig:samplerates}: in the average case for 16ms bursts Dolphin
either incorrectly decoded or omitted 36\% of the data. It is clear that the
sample rate should be 32000Hz with a spacing of 30Hz per encoded frequency.

\subsection{Length}
The length of the tone transmitted has a direct effect on the data transfer rate
of Dolphin. Shorter tones mean higher throughput, whereas longer tones may be
more reliable and accurate. As the tones still need to be powers of 2 for the
sake of the Fourier Transform I test 16, 32, 64 and 128ms. As is clear from
Figure~\ref{fig:samplerates}, the length of the tone clearly has an influence on
the accuracy, but the average case presented in that experiment was influenced
by the occasional poor performance of one transfer. A system which works
perfectly 9 times out of 10 is preferable to one which drops 10\% of the data in
every transmission. Figure~\ref{fig:length} shows how accurate these 4
burst-lengths were over a series of 10 tests each, considering all the test data
rather than an average.
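The power-of-two requirement is what restricts the candidates to these four lengths; at the chosen 32000Hz rate each gives a power-of-two sample count (a quick check, with names of my own):

```python
SAMPLE_RATE = 32000  # samples per second

def samples_per_tone(length_ms):
    """Number of samples recorded for one tone of the given length."""
    return SAMPLE_RATE * length_ms // 1000

def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

# 16, 32, 64 and 128ms give 512, 1024, 2048 and 4096 samples respectively
sizes = {ms: samples_per_tone(ms) for ms in (16, 32, 64, 128)}
```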

\begin{figure}[h]
\includegraphics[width=\textwidth]{length.png}
\caption{Box plots showing the number of byte errors in the 40 tests of
Dolphin under different tone lengths. 16ms is clearly unsuitable, and 64ms seems
to counterintuitively outperform 128ms.}
\label{fig:length}
\end{figure}

Surprisingly, 128ms was not the most accurate choice. The Fourier Transform
sums the amplitudes in each time slot, so the stronger frequencies grow faster
than the weaker ones as larger values are accumulated on each pass, which
should mean that the longer the analysis, the less ambiguous the results.
However, 64ms performs more reliably, with a smaller
spread of data over the same interquartile range. The median of 128ms is 0
errors, which is better than 64ms, but the median of 64ms is still less than 1
in 50 so is an acceptable choice for the sake of improved data transfer speed.
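The speed side of this trade-off is simple arithmetic: with one byte encoded per tone, halving the tone length doubles the throughput (illustrative only):

```python
def bytes_per_second(tone_ms):
    """One byte per tone, so throughput is 1000 / tone length in ms."""
    return 1000 / tone_ms

# 64ms tones give 15.625 bytes per second; 128ms gives only 7.8125
rates = {ms: bytes_per_second(ms) for ms in (16, 32, 64, 128)}
```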

\section{Performance}
\label{sec:performance}
So far I have tested relatively small files, and have used the same file when
testing the effect changing a variable has to minimise the number of changes
that could contribute to the results. Figure~\ref{fig:perfsmall} shows how
Dolphin, with the settings described up to this point, performs when decoding
10 small, different files to verify that it can decode files other than the test
file used so far.

\begin{figure}[h]
\includegraphics[width=\textwidth]{perf_small.png}
\caption{A bar chart showing the average number of byte errors detected from
decoding 10 different files twice. In only one of the cases did the file decode
correctly.}
\label{fig:perfsmall}
\end{figure}

Figure~\ref{fig:perflarge} shows the same process for 10 slightly larger files,
each over 500 bytes instead of 50.

\begin{figure}[h]
\includegraphics[width=\textwidth]{perf_large.png}
\caption{A bar chart showing the average number of byte errors detected from
decoding 10 different files twice. They are slightly higher than for the smaller
files on average, likely because the longer the sound the more likely an element
of background noise will change and interfere, such as a mobile phone ringing.
Assuming the background noise remains relatively constant, they are within an
acceptable margin.}
\label{fig:perflarge}
\end{figure}

These results are similar as, provided the level of background noise remains
within a tolerance, the length of the file decoded does not have an impact on
the reliability of decoding. The same cannot be
said of QR codes, as larger files require smaller dots to represent the bits,
which become increasingly susceptible to camera shake. However, though the
results are similar, they show that Dolphin does not have perfect translation. A
redesign is therefore required to improve performance.

\section{New implementation}
As Figure~\ref{fig:length} shows, the results so far have been varied. This
suggests an unreliable transmission system. At this stage it would be ambitious to say the success criteria were
achieved, as successful decoding happens only 10\textendash20\% of the time.
This has prompted me to rethink the protocols involved in decoding the audio.
Figure~\ref{fig:oldimpl} shows how presently the decoding listens for data and
then considers the maximum length of an encoded tone, which may include
frequencies other than the one intended.

\begin{figure}[h]
\includegraphics[width=\textwidth]{oldimpl.png}
\caption{In the old implementation almost every segment of data analysed
contained a small amount of the next segment as well. As this is smaller than
the first its frequency would never be chosen instead of the dominant one, but
it means the desired frequency has a lower total in the FFT and background
noise may be slightly louder. Also, in the worst case the segments each fill
exactly half of the recorded chunk, so both the chance of being ignored as
too quiet and the odds of an incorrect decode increase.}
\label{fig:oldimpl}
\end{figure}

It also shows how in the worst case
the recording may have started exactly half way through a frequency pulse which
means every sound sent to the Fourier Transform has two equally strong
frequencies so the probability of getting one byte correct is 50\%. It is
likely that more than one byte will be sent in a transmission so the
probability of the entire file being decoded correctly becomes increasingly
small. Figure~\ref{fig:newimpl} shows my new implementation. Dolphin still scans
the incoming data as before until the first data frequency is detected. Then it
divides each data segment (currently 64ms) in two and measures which is
stronger.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newimpl.png}
\caption{In the new implementation the first two consecutive segments of data
are considered, and as one necessarily must contain nothing but a single
frequency it will be stronger than the other and not be diluted by extraneous
frequencies. After that only the chunks containing one frequency are
considered. Now the ``worst case'' scenario is both contain nothing but the
desired frequency, in which case either the first segment is chosen or one is
minutely stronger than the other.}
\label{fig:newimpl}
\end{figure}

It also shows how one of these two segments is guaranteed to contain
nothing but the desired frequency, and in the rare event that the start of the
recording corresponds exactly with the start of the data sent, both will
contain only the correct frequency. The weaker amplitude is ignored as it
contains other data, and from then on only alternate segments of the recording
are sent for decoding, meaning every segment analysed contains a single,
undiluted frequency.
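The alignment step can be sketched as follows; `peak_amplitude` is a naive DFT stand-in for the Apache FFT comparison used in the Java implementation, and all names are illustrative:

```python
import cmath

def peak_amplitude(samples):
    """Largest DFT bin magnitude: a half containing a single pure tone
    concentrates its energy in one bin, while a half straddling two tones
    splits it across bins, so the pure half always scores higher."""
    n = len(samples)
    return max(abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t in range(n)))
               for k in range(1, n // 2))

def aligned_halves(recording, seg_len):
    """Split the recording into half-segments, pick the stronger of the
    first two, then keep every alternate half from that point on."""
    half = seg_len // 2
    halves = [recording[i:i + half]
              for i in range(0, len(recording) - half + 1, half)]
    start = 0 if peak_amplitude(halves[0]) >= peak_amplitude(halves[1]) else 1
    return halves[start::2]
```

If both halves are pure (the recording happened to start exactly on a tone boundary), the `>=` picks the first, matching the behaviour described above.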

This new implementation may perform better with different variables, as even
though it is receiving 64ms bursts for one byte, it is now only decoding 32ms
of it, which performed poorly under the old system.

\subsection{Sample rate, spacing and length}
I again tested 16, 32 and 64ms, which means the actual length decoded would be
8, 16 and 32ms to see if shorter sequences were just as reliable. I also tested
at 128ms as in one of the tests of 64ms a character was missing, suggesting that
64ms was not enough for perfect transmission every time.
Figure~\ref{fig:newsample} shows the comparison of the new system performance
with the old system at both 16000Hz and 32000Hz for all 4 tone lengths.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newsample.png}
\caption{The red data, representing 32kHz sample rates, still outperforms the
blue 16kHz. For the 16ms and 64ms tones in the new implementation the average
number of errors over 10 tests was 0, meaning 100\% data retrieval.}
\label{fig:newsample}
\end{figure}

Clearly the new implementation also works best with a sample rate of 32000Hz, a
frequency spacing of 30 and a tone length of 64ms. Figure~\ref{fig:newboxplot}
shows the number of byte errors detected in the tests, analogous to
Figure~\ref{fig:length}. This figure includes a bar chart as the original box
plot has so much data registering no errors.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newlength.png}
\caption{The results of the tests for the new implementation. Both graphs show
the range of errors across all 40 tests, as the box plots are non-existent for
3 of the variables.}
\label{fig:newboxplot}
\end{figure}

It is unclear why 128ms consistently underperforms 64ms across all variations
of the available variables, but with 3 errors in 2 of the tests and at least 1
error in half the tests it is clearly an unreliable choice. Treating the error
in one of the 32ms bursts as an outlier, 16, 32 and
64ms bursts now result in 100\% accuracy of decoding.

\subsection{Performance}
I have demonstrated that performance has improved significantly on the smaller
files. For completeness, Figure~\ref{fig:newperf} shows the same test as
Section~\ref{sec:performance} with the new implementation.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newperf_small.png}
\caption{The total number of errors detected in testing the same 10 small files
with the new implementation. There was only 1 error across all 30 cases. The
tests of the larger, 500 byte files resulted in no errors, so this error is
likely due to extraneous factors.}
\label{fig:newperf}
\end{figure}

As expected, file size still does not impact performance and the test of
the larger files resulted in perfect translation in all 30 test cases, so
I have omitted that graph. Dolphin now works well enough that the original
success criterion of successfully decoding a file has been achieved.

\section{Noise}
Using the values for sample rate and encode length I have derived so far, I can
now experiment with how well Dolphin copes with background noise. To test this I
use a sample of recorded conversation freely available on the
Internet\footnotemark, played while Dolphin is decoding. I use the same sample
of conversation to get more reliably comparable results in repeated tests.

\footnotetext{\emph{http://www.soundjay.com/crowd-talking-1.html}}

\begin{figure}[h]
\includegraphics[width=\textwidth]{noise.png}
\caption{A reasonable level of background conversation causes a small rise in
most frequencies from 0\textendash2000Hz but this is not a significant factor
in what is detected. Due to the multiple different frequencies used in
speech the total noise is distributed, meaning the largest single detected
frequency is still the constant tone of the Dolphin transmission. Note the
lower 300Hz are still ignored.}
\label{fig:backgroundnoise}
\end{figure}

Figure~\ref{fig:backgroundnoise} shows how speech causes small increases in
frequencies between 0 and 2000Hz, but because speech is varied over multiple
frequencies the total amplitude detected in any one frequency is minimal,
meaning the overall impact is negligible. In the event of louder conversations
causing a more significant influence on all the frequencies in the range of
speech Dolphin can still work by increasing the volume of the transmission.
Dolphin cannot be expected to work flawlessly in excessive noise in the same way
QR-codes are not expected to work in total darkness. The medium used, be it
visual for QR-codes or audible for Dolphin, should be relatively unobstructed
to justify using one system over the other. Dolphin is not intended to produce
apps that replace QR-codes, simply to operate in situations where QR-code
style techniques are less suitable, such as over longer distances or for
slightly larger files.


\bibliography{citations}

\end{document}