\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{color}
\usepackage{listings}
\lstset{ %
language=Java,                % choose the language of the code
basicstyle=\footnotesize,       % the size of the fonts that are used for the code
numbers=left,                   % where to put the line-numbers
numberstyle=\footnotesize,      % the size of the fonts that are used for the line-numbers
stepnumber=1,                   % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt,                  % how far the line-numbers are from the code
backgroundcolor=\color{white},  % choose the background color. You must add \usepackage{color}
showspaces=false,               % show spaces adding particular underscores
showstringspaces=false,         % underline spaces within strings
showtabs=false,                 % show tabs within strings adding particular underscores
frame=single,           % adds a frame around the code
tabsize=2,          % sets default tabsize to 2 spaces
captionpos=b,           % sets the caption-position to bottom
breaklines=true,        % sets automatic line breaking
breakatwhitespace=false,    % sets if automatic breaks should only happen at whitespace
escapeinside={\%*}{*)}          % if you want to add a comment within your code
}
\renewcommand{\lstlistingname}{Code Sample}

\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Implementation}

This chapter describes how I implemented each part of the project and how it can
be used by the end user. It also explains how I arrived at the decision to use
specific sample rates and lengths of encoded bytes.

\section{Encoding}

The coding scheme I decided on was \emph{Frequency Shift Keying (FSK)}, largely
because both amplitude and phase techniques are best suited to binary encoding,
which would slow down transfer rates. As I described in Chapter 2, FSK can
easily accommodate 2-bit or 8-bit encoding with little impact on performance.

I decided to use the bit structure of the file as a reference for which
frequencies should be encoded. This is simpler and less likely to result in code
errors than heuristic encoding schemes which could predict the bit pattern of
later bytes, or include self references to earlier portions of the sound when
bytes are repeated. The bytes comprising the file can be accessed using a
FileInputStream and an existing method in the \emph{Apache Commons} library
\emph{IOUtils} called \emph{toByteArray}, which is applied directly to the
FileInputStream.
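
This step can be sketched as follows. \emph{IOUtils.toByteArray} is not
reproduced here, so the helper below is a standard-library stand-in that drains
the stream in the same way; the names are illustrative.

```java
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative stand-in for Apache Commons IOUtils.toByteArray: read the
// whole stream into a growable buffer and return it as a byte array.
public class FileBytes {
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n); // accumulate until end of stream
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[]{1, 2, 3});
        try (FileInputStream fis = new FileInputStream(tmp.toFile())) {
            // with Apache Commons this would be IOUtils.toByteArray(fis)
            byte[] data = readAll(fis);
            System.out.println(data.length); // 3
        }
        Files.delete(tmp);
    }
}
```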

\subsection{Fault tolerance}

Once the bit structure of the file has been accessed in a byte array it is
necessary to map these bytes to frequencies for playback as a sound. It is
important to remember that Java has no unsigned types, so any of the bytes in
the array with a 1 in the most significant bit will be a negative number. This
means a naïve mapping of multiplying the bytes by a suitable factor will result
in negative frequencies, which will not work. Therefore, before mapping bytes to
frequencies the signed bytes need to be converted to unsigned, using a larger
type.
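
The conversion can be sketched as follows; masking with \emph{0xFF} widens the
signed byte into an int in the range 0 to 255, which can then be mapped safely.

```java
// Sketch of converting Java's signed bytes to unsigned values before
// mapping them to frequencies. Masking with 0xFF widens the byte to an
// int in the range 0..255 instead of -128..127.
public class UnsignedBytes {
    static int toUnsigned(byte b) {
        return b & 0xFF; // e.g. (byte) -1 becomes 255
    }

    public static void main(String[] args) {
        byte[] fileBytes = { (byte) 0x00, (byte) 0x7F, (byte) 0x80, (byte) 0xFF };
        for (byte b : fileBytes) {
            System.out.println(b + " -> " + toUnsigned(b));
        }
    }
}
```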

A further complication arises from the prevalence of
low frequencies in everyday life. Figure~\ref{fig:lowfreq} shows the microphone
input from listening to a tone of 700Hz. 

\begin{figure}[t]
\includegraphics[width=\textwidth]{lowfreq.png}
\caption{Background noise means low frequencies are almost always detected
alongside the desired frequency. To counter this frequencies below 300Hz should
be ignored.}
\label{fig:lowfreq}
\end{figure}

Background noise such as computer fans, wind and distant
conversations cause a rise in the amount of lower frequencies detected. To
counter this, no byte value should be mapped to a frequency below 300Hz. This
can be achieved by shifting every frequency up by that amount. Furthermore, as
frequencies very close together are almost indistinguishable, it is not possible
to simply map each byte value from 0 to 255 to the frequency of the same number
(offset by 300). A multiplying factor is required to space out the frequencies
and allow for some errors in transmission. A spacing of 30Hz per frequency is
enough at a reasonable volume; I will discuss larger and smaller spacings in the
Evaluation. As the y-axis of Figure~\ref{fig:lowfreq} shows amplitude, one way
to increase accuracy is to turn up the volume, making the desired frequency
simply the most prominent one measured. This should only be used as a last
resort, however, as if the feature is too intrusive in everyday life there will
be less incentive for developers to use it.
Figure~\ref{fig:fault_tolerance} shows how the frequency spacing technique also
allows for some simple error correction.

\begin{figure}[t]
\includegraphics[width=\textwidth]{errorcorrection.png}
\caption{The frequency detected may not be exactly the one transmitted. By
spacing the frequencies out, the receiving end can make a best estimate of what
the original frequency was meant to be, as the received frequency is likely to
be closest to what it was meant to be. In this example, a detected frequency of
680 is closer to 690 than 660 so will be corrected to 690.}
\label{fig:fault_tolerance}
\end{figure}

Using the numbers I have discussed up to this point, byte values for the raw
file in the range -128 to 127 will be mapped onto frequencies 300 to 7950Hz, at
30Hz intervals. This means I can use a 16000Hz sample rate and safely encode and
decode the sounds according to the \emph{Nyquist-Shannon} sampling theorem.
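
The mapping described above can be sketched as follows; the constant and method
names are illustrative, but the numbers (30Hz spacing, 300Hz offset) are those
discussed in this section.

```java
// Sketch of the byte-to-frequency mapping: unsigned byte values 0..255 are
// spaced 30Hz apart and shifted up by 300Hz, giving 300Hz to 7950Hz.
public class FrequencyMap {
    static final int SPACING = 30;  // Hz between adjacent byte values
    static final int OFFSET  = 300; // Hz, keeps tones above background noise

    static int byteToFrequency(byte b) {
        return (b & 0xFF) * SPACING + OFFSET;
    }

    static byte frequencyToByte(int freqHz) {
        // inverse mapping: round to the nearest 30Hz slot for fault tolerance
        int value = Math.round((freqHz - OFFSET) / (float) SPACING);
        return (byte) value;
    }

    public static void main(String[] args) {
        System.out.println(byteToFrequency((byte) 0));   // 300
        System.out.println(byteToFrequency((byte) -1));  // 255 unsigned -> 7950
        // a detected 680Hz tone is corrected to the 690Hz slot (value 13)
        System.out.println(frequencyToByte(680) & 0xFF); // 13
    }
}
```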

\subsection{Java libraries}

In Java a sine wave can be represented by a byte array, with each array element
representing a sampled value of the function. This is achieved using the Java
\emph{Math} library, shown in Code Sample~\ref{lst:sineWave}, when given
integers \emph{freq}, \emph{sampleRate} and \emph{time}.

\begin{lstlisting}[caption={Encoding a sine wave in a Java byte array. The sine
value is multiplied by 127 to change the range of the result from the usual
(-1,1) to (-127,127), utilising all the possible values of a
byte.},label={lst:sineWave}]
byte[] sineWave = new byte[sampleRate*time];
for (int i=0; i<sineWave.length; i++) {
    double angle = (i * freq * 2.0 * Math.PI) / sampleRate;
    sineWave[i] = (byte)(Math.sin(angle) * 127.0);
}
\end{lstlisting}

This code exists in a method to convert an integer value to the
corresponding sound for that frequency. Each of these byte arrays can be stored
in an array of byte arrays and then output one after the other by iterating over
it. To output the sound on a computer system, as this technique is not limited
to phone-phone communication, a SourceDataLine can be obtained for the computer
speakers using the library AudioSystem\footnotemark. The SourceDataLine write
method can then be used to play the byte array directly, as the processing on
the array up to this point means it is in PCM format (for .wav files), which is
supported by the library. It is important to remember to call the drain method
before closing the SourceDataLine, in the same way you would call flush for an
output stream. Without this call the sound will not play for the full duration
of the array and data will be lost.

\footnotetext{Using the method AudioSystem.getSourceDataLine(AudioFormat),
where the AudioFormat has been initialised with the required settings for data
capture, e.g. use 8 bits per sample as we are dealing with bytes.}
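
The playback path can be sketched as follows. The AudioFormat settings (16000Hz,
8 bits per sample, mono, signed PCM) mirror those used in this chapter; the
method names are illustrative, and actually playing the line requires audio
hardware, so that part is kept in a separate method.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Sketch of the computer-side playback path: build an AudioFormat matching
// the encoding, obtain a SourceDataLine, write the PCM byte array, and
// drain before closing so the full sound plays.
public class Playback {
    static AudioFormat pcmFormat() {
        // sampleRate, sampleSizeInBits, channels, signed, bigEndian
        return new AudioFormat(16000f, 8, 1, true, true);
    }

    static void play(byte[] sound) throws LineUnavailableException {
        AudioFormat format = pcmFormat();
        SourceDataLine sdl = AudioSystem.getSourceDataLine(format);
        sdl.open(format);
        sdl.start();
        sdl.write(sound, 0, sound.length);
        sdl.drain(); // like flush(): wait for the buffer to empty
        sdl.close();
    }

    public static void main(String[] args) {
        System.out.println(pcmFormat().getSampleRate()); // 16000.0
    }
}
```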

\subsection{Android libraries}

Android does not support the AudioSystem library, but has similar functionality
in AudioTrack. Previously the AudioFormat input to the SourceDataLine was
initialised with information on the sample rate, bits per sample, etc.
Correspondingly, an AudioTrack is initialised with mostly the same
arguments using class-specific constants, after which the code to play the
sound on a mobile device is almost identical apart from different method names,
as shown in Code Sample~\ref{lst:play_audio}.

\begin{lstlisting}[caption={The differences between Java and Android access to
real time audio output.},label={lst:play_audio}]
//For Java
sdl.open(AudioFormat);
sdl.start();
sdl.write(array,0,arrayLength);
sdl.drain();
sdl.close();

//For Android
audioTrack.play();
audioTrack.write(array,0,arrayLength);
audioTrack.stop();
audioTrack.release();
\end{lstlisting}

The way Android is structured also means there are additional tasks to
complete, such as granting the program access to the speaker in the Manifest
file, but these do not impact the implementation of the audio output itself.

\section{Decoding}

I decided to use a Fourier Transform to determine the frequency or frequencies
present, rather than counting the solutions to $f(x)=0$. Counting solutions
would not allow multiple frequencies to be decoded at once, which may be
implemented to improve bandwidth, and background noise could have a very
unpredictable effect on the shape of the incoming wave. There are also existing
libraries which implement various types of transform, so there is no need to
repeat work which has already been done and thoroughly tested.

\subsection{Microphone input}

To access the microphone on a mobile device using Android the
necessary permissions must be added to the manifest\footnotemark. The code
itself then uses the class AudioRecord to gather the data from the microphone
and a ByteArrayOutputStream to store it until the transmission ends. When
initialising the AudioRecord object it is possible to use the exact same
arguments as those used when initialising the AudioTrack object for playback, as
the sound being played should be in the same format once it is being decoded.

The read method of AudioRecord returns an integer stating how many bytes were
read. I have set the number of bytes to be read as the maximum number it can
read without blocking. This method can therefore be used inside a while loop to
continue reading data from the microphone until the total number of bytes read
reaches a certain value. This will be useful in audio QR codes as the maximum
size could be determined in advance. Alternatively the while-loop condition
could be a flag which is changed when the decoding portion of the code detects
that the transmission has stopped. As with the AudioTrack object it is important
to call stop and release once the microphone is no longer needed. The
ByteArrayOutputStream can then be converted to a byte array and returned for
analysis in a Fourier Transform.

\footnotetext{<uses-permission android:name="android.permission.RECORD\textunderscore AUDIO"
/>}
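
The capture loop can be sketched as follows. AudioRecord is Android-only, so a
plain InputStream stands in for it here; the accumulate-until-count pattern is
the same, with read returning how many bytes were delivered on each call.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the microphone capture loop: keep calling read(), which
// returns the number of bytes delivered, and accumulate them in a
// ByteArrayOutputStream until a target byte count is reached.
public class CaptureLoop {
    static byte[] capture(InputStream mic, int targetBytes) throws IOException {
        ByteArrayOutputStream recording = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int total = 0;
        while (total < targetBytes) {
            int read = mic.read(buffer, 0, Math.min(buffer.length, targetBytes - total));
            if (read == -1) break;       // source exhausted (transmission ended)
            recording.write(buffer, 0, read);
            total += read;
        }
        return recording.toByteArray();  // handed on to the Fourier Transform
    }

    public static void main(String[] args) throws IOException {
        byte[] fakeMic = new byte[5000];
        byte[] captured = capture(new ByteArrayInputStream(fakeMic), 4096);
        System.out.println(captured.length); // 4096
    }
}
```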

\subsection{Apache Fast Fourier Transform}

The Apache libraries are located at \emph{http://commons.apache.org/}, and the
Fast Fourier Transform I used is stored under
\emph{org.apache.commons.math3.transform}. Using an existing library is more
reliable than implementing one of my own because it has been extensively tested
by a large number of people instead of just being tested by me. It also comes
with various options for the different transforms. For example, there are
separate methods already written to either transform the data and return
a new array, or to perform the transformation in place to save memory,
which is useful considering the current implementation is using 16000
bytes per byte of data encoded. It can also distinguish between forward and
inverse transforms, which is not important for the implementation I have used,
but could be useful if I need to implement other functionality. It also has its
own API documentation already written, which will help future developers alter
this project if necessary.

The FFT returns an array of type Complex, defined by the Apache Commons
library. This datatype stores two values, the real and imaginary parts of the
number, and the class offers methods to retrieve these parts separately.
Each small timespan of the sound is sent to the FFT individually to determine
which frequencies are present in that 64 millisecond burst. Once the complex
array is returned a new array is defined of type Double, which is populated by
taking the square root of the real value squared plus the imaginary value
squared for each array element. This is to find the true amplitude of the sound
measured at each frequency. Then, by iterating over the array, the frequency
can be determined by multiplying the array index containing the largest value
by the sample rate and dividing by twice the total size of the array. The
division is by twice the array size because the FFT returns both positive and
negative frequencies, which are mirrored about 0, so -400Hz has the same
amplitude as 400Hz. Rounding to the nearest expected frequency (a multiple of
30) will return the intended frequency.
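
The magnitude and peak-picking steps can be sketched as follows. The Apache
FFT is not reproduced here, so a naive DFT (illustrative only, and far slower
than a real FFT) stands in for it; the magnitude calculation, peak search and
rounding to the 30Hz grid are as described above. Magnitudes are computed for
the positive half of the spectrum only, so the bin frequency is the index
multiplied by the sample rate and divided by twice the magnitude array's
length.

```java
// Sketch of the decoding step: compute sqrt(re^2 + im^2) per bin, find the
// strongest bin, convert it to a frequency and snap to the 30Hz grid.
public class PeakDetector {
    // magnitudes for bins 0 .. N/2 - 1 via a naive DFT (FFT stand-in)
    static double[] magnitudes(double[] samples) {
        int n = samples.length;
        double[] mags = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            mags[k] = Math.sqrt(re * re + im * im);
        }
        return mags;
    }

    static int peakFrequency(double[] samples, int sampleRate) {
        double[] mags = magnitudes(samples);
        int peak = 1;                       // skip the DC bin at index 0
        for (int k = 2; k < mags.length; k++) {
            if (mags[k] > mags[peak]) peak = k;
        }
        double freq = peak * sampleRate / (2.0 * mags.length);
        return 30 * (int) Math.round(freq / 30.0); // snap to 30Hz grid
    }

    public static void main(String[] args) {
        int sampleRate = 16000, n = 1024;   // one 64msec tone at 16kHz
        double[] tone = new double[n];
        for (int t = 0; t < n; t++) {
            tone[t] = Math.sin(2 * Math.PI * 690 * t / sampleRate);
        }
        System.out.println(peakFrequency(tone, sampleRate)); // 690
    }
}
```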

\subsection{Test harness}

To test the encoding and decoding I \emph{TODO}

\section{Design considerations}

This project is like a pseudo-library, or a module to be dropped in elsewhere,
rather than a standalone product. It is meant to be used inside an app that uses
this feature to do something, such as an app that creates audio-based QR codes.
With that in mind there were various considerations in the design that affected
how the functionality is accessed.

\subsection{Black box design}

A developer using this project should not have to be intimately familiar with
the inner workings, nor indeed the theory behind how it works. Therefore, the
interaction with the program takes the form of input and output using sockets.
The file to be encoded is sent to a socket and retrieved by my classes. The
sound is then played by the speaker from the inside of the black box without any
further user interaction. When decoding, the socket input is the microphone, and
the decoded byte array should be returned to the user. Figure~\ref{fig:blackbox}
shows how an application created by the end user can utilise the socket
interface without needing to interact with the hardware at all.

\begin{figure}[t]
\includegraphics[width=\textwidth]{blackbox.png}
\caption{Other apps should be able to use my project like a black box and be
able to integrate this into other projects or applications like an extra
module.}
\label{fig:blackbox}
\end{figure}

\subsection{End user}

Because I am not creating the end app that uses this technology, I can leave
certain features to other developers, or to myself in the future. For example,
more advanced error correction, which guesses the errors and changes them, can
be implemented in exactly the same way as for other transmission media.
Likewise, I have not created a header telling the decoder what format to save
the decoded file as, since this module could be used for more purposes than
just decoding files: it simply encodes and decodes bit sequences irrespective
of what they really mean. Future programs, however, could create such a header
containing an initialisation tone (to start the decoding), filename, file type
and end-of-header tone, and follow it with the data.

\section{Testing}

A certain amount of testing needed to be done during implementation to find
optimum sample rates, encode lengths, etc. Here I will outline how I
carried this out, and present more detailed results as part of the evaluation.

\subsection{Android application}

To test my module I have created a small Android application to simply send and
receive text files of no more than 128 bytes. Figure~\ref{fig:blackbox} shows
how my app interacts with my project, taking the place of the top ``end user''
section. In Chapter 4 I will present the results for each of the variables I
tested with various values and discuss the trade-offs. To summarise,
\begin{itemize}
  \item{{\bf Enlarging factor:} multiplying each byte value by 30 (spacing the
  frequencies 30Hz apart) is sufficient to correct errors at a reasonable sound
  amplitude}
  \item{{\bf Sample rate:} given the enlarging factor a minimum sample rate
  of 15900Hz is required, and for the Fast Fourier Transform the sample rate
  must be a power of 2 multiplied by 1000. The choices are therefore 16000 and
  32000; I have chosen 16000 to keep processing time and memory usage to a
  minimum}
  \item{{\bf Tone length:} for the FFT, tone lengths in milliseconds must also
  be a power of 2; I tested 16, 32, 64 and 128. I selected 64msec as it
  performed far more reliably than 32msec in testing.}
  \item{{\bf Per-encode bit length:} the length of each encoded section is
  variable; I tested 1, 2, 4 and 8-bit sequences, of which 8-bit performed the
  best.}
\end{itemize}

\subsection{Graphing Tools}

To analyse the data received I used a combination of a Java graphing class and
\emph{TODO}

\section{Extensions}

I have implemented an extension to the core requirements in the project
proposal, allowing simultaneous two-way communication between the mobile
devices. This is useful not only for simultaneous transfer of files, but also as
a method of feedback sent from the receiving phone so concepts such as backoff
signals can be implemented.

To do this I created two threads in the program to run simultaneously, one for
sending data and one for receiving. Both phones now store data from the
microphone as well as the data they have sent out. When analysing the incoming
data the phone will compare what it receives to what it is sending. If they
exactly match then the phone knows that it is the only one sending data and can
ignore it, but still progress through the array in preparation for a collision.
Once a collision occurs and it detects two frequencies it can compare them both
to the array of data it is sending and store the other in an array for foreign
frequencies. In the event both phones send the same frequency at the same time,
it should have a significantly higher amplitude, and the receiver can opt to
save it in the array regardless. Once the phone runs out of data it is sending to compare to,
or if it stops detecting collisions as the other phone has stopped transmitting,
the foreign frequencies array can be output to the user.
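
The comparison logic on the receiving thread can be sketched as follows, with
illustrative names; for simplicity this version discards a tone when both
phones send the same frequency, rather than optionally saving it as described
above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the duplex comparison: at each time step, compare the detected
// frequencies against the tone this phone is currently sending, and store
// anything that does not match as a foreign frequency from the other phone.
public class DuplexReceiver {
    // sent[i] is our own tone at step i; detected[i] holds the frequencies
    // found by the FFT at step i (one entry normally, two in a collision)
    static List<Integer> foreignFrequencies(int[] sent, int[][] detected) {
        List<Integer> foreign = new ArrayList<>();
        for (int i = 0; i < detected.length; i++) {
            int own = i < sent.length ? sent[i] : -1; // we may finish first
            for (int freq : detected[i]) {
                if (freq != own) {
                    foreign.add(freq); // the other phone's data
                }
            }
        }
        return foreign;
    }

    public static void main(String[] args) {
        int[] sent = {300, 330, 360};
        int[][] detected = { {300}, {330, 690}, {360, 720}, {750} };
        System.out.println(foreignFrequencies(sent, detected)); // [690, 720, 750]
    }
}
```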

\end{document}