\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{color}
\usepackage{listings}
\lstset{ %
language=Java,                % choose the language of the code
basicstyle=\footnotesize,       % the size of the fonts that are used for the code
numbers=left,                   % where to put the line-numbers
numberstyle=\footnotesize,      % the size of the fonts that are used for the line-numbers
stepnumber=1,                   % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt,                  % how far the line-numbers are from the code
backgroundcolor=\color{white},  % choose the background color. You must add \usepackage{color}
showspaces=false,               % show spaces adding particular underscores
showstringspaces=false,         % underline spaces within strings
showtabs=false,                 % show tabs within strings adding particular underscores
frame=single,           % adds a frame around the code
tabsize=2,          % sets default tabsize to 2 spaces
captionpos=b,           % sets the caption-position to bottom
breaklines=true,        % sets automatic line breaking
breakatwhitespace=false,    % sets if automatic breaks should only happen at whitespace
escapeinside={\%*}{*)}          % if you want to add a comment within your code
}
\renewcommand{\lstlistingname}{Code Sample}

\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Implementation}

Dolphin is a library, or pre-made module, intended to be used by other
applications that require bits to be converted to sound.
Figure~\ref{fig:structure} shows how this module is structured. The two main
sections are encoding bits as sounds and decoding sounds into bits. The encoding
section uses data provided by the user and accesses the speakers directly,
whereas the decoding section accesses the microphone directly and returns a
sequence of bits to the user based on what was recorded. The \texttt{Encode}
class takes a byte array representing a file and returns a byte array
containing samples of a series of sine waves. The \texttt{Decode} class takes a
byte array representing recorded microphone data, performs a conversion and
returns a byte array representing the original bytes encoded. Both then use the
\texttt{Output} class to play the sound and to save the file, respectively.

The rest of this chapter outlines the specifics of how the encoding and decoding
are performed.

\begin{figure}[t]
\includegraphics[width=\textwidth]{structure.png}
\caption{A class diagram showing the structure of Dolphin.}
\label{fig:structure}
\end{figure}

\section{Encoding}

As I discussed in Chapter 2, there are many different ways to encode the data
as sound waves. The coding scheme I use is \emph{Frequency Shift Keying (FSK)},
largely because both amplitude and phase techniques are best suited to binary
encoding, which would limit transfer rates. FSK can easily accommodate n-bit
encoding with little impact on performance, as several thousand usable
frequencies are available within a microphone's detection range.

Every file on a computer can be expressed in binary, regardless of what format
the file is in. I use this bit structure as a reference for which
frequencies should be encoded and encode the file as-is, without any heuristic
analysis. This is simpler than creating different cases for different types of
file, e.g. sending text one letter at a time and images one pixel at a time
would require different implementations and an analysis of what the file
contains beforehand. I considered enhancing the encoding scheme using a
technique similar to \emph{delta frames} in video encoding, in which only the
changes from one frame to the next (or in this case one byte to the next) are
sent. This works for video as sequential frames are often very similar, so
few changes are necessary and the amount of data sent is reduced. However, there
is no such guarantee for the bit structure of a generic file, so its performance
would vary and ultimately not be worth the extra complication.

\section{Libraries}

Using previously developed libraries in software is not only easier than
reimplementing features but is also good professional practice, as it avoids
code duplication and reduces the risk of errors: established libraries will
have been extensively tested already.

The bytes comprising the file, which are to be passed in for encoding, can be
accessed using a \texttt{FileInputStream} and the \texttt{toByteArray} method
of the \texttt{IOUtils} class in the \emph{Apache Commons} library, applied
directly to the \texttt{FileInputStream}. Once the bit structure of the file
has been read into a byte array, the bytes are mapped to frequencies for
playback as a sound. As Java has no unsigned types, any byte in the array with
a 1 in its most significant bit will be a negative number, so a na{\"i}ve
mapping that multiplies the bytes by a suitable factor will produce negative
frequencies, which will not work. Therefore, before mapping bytes to
frequencies, the signed bytes must be converted to unsigned values using a
larger type such as a \texttt{short}.
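This conversion can be sketched as follows (the method name is illustrative);
masking with \texttt{0xFF} keeps the low eight bits and discards the sign
extension:

\begin{lstlisting}[caption={A sketch of converting a signed Java byte to its
unsigned value before mapping it to a frequency.},label={lst:unsigned}]
public static int toUnsigned(byte b) {
	// (byte) 0x90 is -112 in Java; masking yields 144
	return b & 0xFF;
}
\end{lstlisting}

The resulting value, in the range (0,255), can then be multiplied by the chosen
factor to give a positive frequency.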

In Java a sine wave can be represented by a byte array, with each array element
representing a sampled value of the function. I sample by using the Java
\emph{Math} library, shown in Code Sample~\ref{lst:sineWave}, when given
integers \texttt{freq}, \texttt{sampleRate} and \texttt{time}.

\begin{lstlisting}[caption={Encoding a sine wave in a Java byte array. The sine
value is multiplied by 127 to change the range of the result from the usual
(-1,1) to (-127,127), utilising a much wider range of
possible byte values.},label={lst:sineWave}]
public byte[] bufferSound(int freq, int time, int sampleRate) {
	byte[] sineWave = new byte[sampleRate*time];
	for (int i=0; i<sineWave.length; i++) {
		double angle = (i * freq * 2.0 * Math.PI) / sampleRate;
		sineWave[i] = (byte)(Math.sin(angle) * 127.0);
	}
	return sineWave;
}
\end{lstlisting}

This code exists in a method to convert an integer value to the
corresponding sound for that frequency. Each of these byte arrays can be stored
in an array of byte arrays and then output one after the other by iterating over
it. To output the sound on a computer system (this technique is not limited
to phone-to-phone communication), a \texttt{SourceDataLine} for the computer
speakers can be obtained using the \texttt{AudioSystem}
class.\footnotemark\ A \texttt{SourceDataLine} converts the byte data into the
actual sound for playback.
The \texttt{SourceDataLine} \texttt{write} method can then be used to play the
byte array directly, as the processing on the array up to this point leaves it
in PCM format (as used by \texttt{.wav} files), which the library supports. It
is important to call the \texttt{drain} method before closing the
\texttt{SourceDataLine}, just as you would call \texttt{flush} on an output
stream; without this call the sound will not play for the full duration of the
array and data will be lost.

\footnotetext{Using the method \texttt{AudioSystem.getSourceDataLine(AudioFormat)},
where the \texttt{AudioFormat} has been initialised with the required settings
for playback, e.g. 8 bits per sample as we are dealing with bytes.}
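As a sketch of this playback sequence, assuming an 8-bit, mono, signed PCM
format matching the samples produced above:

\begin{lstlisting}[caption={A sketch of playing an encoded byte array through
the computer speakers. The format arguments are assumptions chosen to match the
8-bit samples used in this project.},label={lst:playback}]
import javax.sound.sampled.*;

public static void play(byte[] samples, int sampleRate)
		throws LineUnavailableException {
	// 8 bits per sample, mono, signed PCM, big-endian
	AudioFormat format = new AudioFormat(sampleRate, 8, 1, true, true);
	SourceDataLine line = AudioSystem.getSourceDataLine(format);
	line.open(format);
	line.start();
	line.write(samples, 0, samples.length);
	line.drain(); // wait for playback to complete, like flushing a stream
	line.close();
}
\end{lstlisting}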

Android does not support the \texttt{AudioSystem} library, but has similar
functionality in \texttt{AudioTrack}. Previously, the \texttt{AudioFormat} input
to the \texttt{SourceDataLine} was initialised with information on the sample
rate, bits per sample, etc.
Correspondingly, an \texttt{AudioTrack} is initialised with mostly the same
arguments using class-specific constants, after which the code to play the
sound on a mobile device is almost identical, apart from the method names.
Conceptually, both start the object, write to the object, flush the data and
close the object again.

The way Android is written also means there are additional tasks to complete.
One such task is declaring the program's access to the speaker in the
\emph{manifest} file, using the Android permission system. Permissions are a
security feature declared statically at compile time, designed to both alert
the user that an app has access to extra features on their device and to allow
the scheduler to more easily allocate shared resources. The relevant permission
in this case is \emph{android.permission.MODIFY\_AUDIO\_SETTINGS}.

\section{Decoding}

I use a Fourier Transform to determine the frequency or frequencies present,
rather than counting the solutions to $f(x)=0$. Counting zero crossings would
not allow multiple frequencies to be decoded at once, which may be implemented
to improve bandwidth, and background noise can have a very unpredictable effect
on the shape of the incoming wave. There are libraries that implement various
types of transform, so there is no need to repeat this effort. For example,
Princeton University has written a single-class version which is used in the
data analysis module of their Computer Science
course.\footnotemark\ The main weakness of this implementation is that it
allocates a new array for every transform, so it is memory inefficient for
very large arrays. \emph{Apache} have written a Fast Fourier Transformer in
their \emph{Commons} library, which can perform the transform in place and so
is much more space efficient. As I am already using the \emph{Apache Commons}
library for processing incoming files, using the same library also limits the
number of additional dependencies I need to include in the Dolphin code.

\footnotetext{This is available online:
\emph{http://introcs.cs.princeton.edu/java/97data/FFT.java.html}}

\subsection{Microphone input}

To access the microphone on a mobile device using Android, the
necessary permission must be added to the manifest.\footnotemark\ The code uses
the \texttt{AudioRecord} class to gather the data from the microphone and a
\texttt{ByteArrayOutputStream} to store it until the transmission ends. When
initialising the \texttt{AudioRecord} object it is possible to use exactly the
same arguments as those used when initialising the \texttt{AudioTrack} object
for playback, as the sound should be in the same format when it comes to be
decoded.

\texttt{AudioRecord.read} returns an integer stating how many
bytes were read. This method can therefore
be used inside a while loop to continue reading data from the microphone until the total number of bytes read
reaches a certain value. This will be useful in audio QR codes as the maximum
size could be determined in advance. Alternatively the while-loop condition
could be a flag which is changed when the decoding portion of the code detects
that the transmission has stopped. As with the \texttt{AudioTrack} object it is
important to call stop and release once the microphone is no longer needed. The
\texttt{ByteArrayOutputStream} can then be converted to a byte array and
returned for analysis in a Fourier Transform.

\footnotetext{\texttt{<uses-permission android:name="android.permission.RECORD\_AUDIO" />}}
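The reading loop can be sketched in plain Java, with a small hand-rolled
interface standing in for \texttt{AudioRecord.read} (which fills a buffer and
returns the number of bytes read):

\begin{lstlisting}[caption={A sketch of the microphone reading loop. The
ChunkSource interface is an illustrative stand-in for AudioRecord.},label={lst:readloop}]
import java.io.ByteArrayOutputStream;

interface ChunkSource {
	// fills the buffer and returns the number of bytes read,
	// mirroring AudioRecord.read
	int read(byte[] buffer);
}

public static byte[] record(ChunkSource mic, int maxBytes) {
	ByteArrayOutputStream recording = new ByteArrayOutputStream();
	byte[] buffer = new byte[1024];
	int total = 0;
	while (total < maxBytes) {
		int n = mic.read(buffer);
		if (n <= 0) break; // the transmission has stopped
		recording.write(buffer, 0, n);
		total += n;
	}
	return recording.toByteArray();
}
\end{lstlisting}

Replacing the byte-count condition with a flag gives the alternative stopping
criterion described above.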

\subsection{Apache Fast Fourier Transform}

Using the existing \emph{Apache} library\footnotemark is more
reliable than implementing one of my own because it has been tested
by a large number of people, instead of just being tested by me. It also comes
with various options for the different transforms. For example, there are
separate methods already written to either transform the data and return
a new array, or to perform the transformation in place to save memory,
which is useful considering the current implementation uses 16000
bytes per byte of data encoded.\footnotemark\ It also distinguishes
between forward and inverse transforms, which is not important for the
implementation I have used, but could be useful if I need to implement other
functionality. Its API is also documented, which will help future developers
alter this project if necessary.

\footnotetext{Located at \emph{http://commons.apache.org/}. The FFT is stored
under \emph{org.apache.commons.math3.transform}}
\footnotetext{For test purposes. Experiments to determine optimum sample rates
can be found in Chapter 4.}

The FFT returns an array of type \texttt{Complex}, defined by the Apache
Commons library. This datatype stores two values, the real and imaginary parts
of the number, and the class offers methods to retrieve these parts separately.
Each small timespan of the sound is sent to the FFT individually to determine
which frequencies are present in that 64 millisecond burst. Once the complex
array is returned, a new array of type \texttt{double} is defined, populated by
taking the square root of the real value squared plus the imaginary value
squared for each element; this gives the true amplitude of the sound measured
at each frequency. The dominant frequency can then be determined by multiplying
the index of the largest value by the sample rate and dividing by the total
size of the array. Only the first half of the array needs to be searched, as
the FFT returns both positive and negative frequencies, mirrored about 0, so
-400Hz has the same amplitude as 400Hz. Rounding to the nearest expected
frequency (a multiple of 30) then returns the intended frequency.
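This amplitude and peak search can be sketched as follows, working directly on
the real and imaginary parts rather than the \texttt{Complex} type; the
bin-to-frequency relation used is the standard index $\times$ sample rate $/$
array size, searched over the first half of the spectrum:

\begin{lstlisting}[caption={A sketch of recovering the dominant frequency from
FFT output, given the real and imaginary parts of each bin.},label={lst:peak}]
public static double peakFrequency(double[] re, double[] im, int sampleRate) {
	int n = re.length;
	int best = 0;
	double bestMagnitude = 0;
	// only the first half is searched, as bins above n/2
	// mirror the negative frequencies
	for (int i = 0; i < n / 2; i++) {
		// true amplitude at this frequency bin
		double magnitude = Math.sqrt(re[i] * re[i] + im[i] * im[i]);
		if (magnitude > bestMagnitude) {
			bestMagnitude = magnitude;
			best = i;
		}
	}
	return (double) best * sampleRate / n;
}
\end{lstlisting}

Rounding the result to the nearest multiple of 30 then recovers the encoded
value.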

\section{Testing}

A certain amount of testing needed to be done during implementation to find
optimum sample rates, encode lengths, etc. Here I will outline how I
carried this out, and present more detailed results as part of the evaluation
in Chapter 4.

\subsection{Testing Procedure}

Figure~\ref{fig:tests} shows a photograph of the testing environment. The sender
and receiver were set a constant distance apart so that the effect of altering
a variable could not be confused with other factors, such as a greater
amplitude caused by a closer audio source.

\begin{figure}[h]
\includegraphics[width=\textwidth]{testing.png}
\caption{A photo of the testing setup. The phones are always a constant
distance apart to minimize extraneous factors in the initial testing.}
\label{fig:tests}
\end{figure}

\subsection{Android application}

To test my module I created a small Android application that simply sends and
receives text files of no more than 128 bytes. Figure~\ref{fig:app} shows a
screenshot of this app and Figure~\ref{fig:blackbox} shows how my app interacts
with my project, taking the place of the top ``end user'' section. In Chapter 4
I will present the results for each of the variables I tested with various
values and discuss the trade-offs. To summarise,
\begin{itemize}
  \item{{\bf Enlarging Factor:} multiplying each frequency by 30 is sufficient
  to correct errors at a reasonable sound amplitude.}
  \item{{\bf Sample Rate:} given the enlarging factor, a minimum sample rate
  of 15900Hz is required, and for the Fast Fourier Transform the sample rate
  must be a power of 2 multiplied by 1000. The choices are therefore 16000 and
  32000; I chose 16000 to keep processing time and memory use to a minimum.}
  \item{{\bf Tone length:} for the FFT, tone lengths in milliseconds must also
  be a power of 2; I tested 16, 32, 64 and 128. I selected 64msec as it
  performed far more reliably than 32msec in testing.}
  \item{{\bf Per-encode bit length:} the length of each section encoded is
  variable; I tested 1, 2, 4 and 8-bit sequences, of which 8-bit performed
  the best.}
\end{itemize}

\begin{figure}[t]
\includegraphics[width=\textwidth]{app_screenshot.png}
\caption{A screenshot of the test app designed to use the Dolphin system, with
access to the phone microphone and speakers. This is the view after the phone
has completed the decoding of a file from the audio input.}
\label{fig:app}
\end{figure}

\begin{figure}[t]
\includegraphics[width=\textwidth]{blackbox.png}
\caption{Other apps should be able to use my project as a black box,
integrating it into other projects or applications like an extra module.}
\label{fig:blackbox}
\end{figure}

\subsection{Graphing tools}

To analyse the data received I used a combination of a Java graphing class and
Matlab. The Graph class uses Java's existing Swing \texttt{JPanel} to display
the graph on screen, overriding \texttt{JComponent.paintComponent(Graphics g)}.
It draws a series of lines using \texttt{g2.draw(new Line2D.Double(x1, y1, x2,
y2))}, with the endpoint \texttt{y2} of each line being the next value in the
data given (in this case the frequencies recorded).

The data given to Matlab for graphing was obtained by storing a record of 
\begin{enumerate} 
  \item the original file to be encoded;
  \item its encoded frequency values;
  \item the microphone input after transmission;
  \item and the file written based on the decoded information.
\end{enumerate}
All of these take the form of byte arrays for numerical comparison. Comparing
(1) with (4) and (2) with (3) will give a measure of how accurate each
transmission is.
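The byte-wise comparison can be sketched as a simple match ratio (the method
name is illustrative):

\begin{lstlisting}[caption={A sketch of measuring transmission accuracy as the
fraction of matching bytes between two arrays.},label={lst:accuracy}]
public static double accuracy(byte[] expected, byte[] actual) {
	int overlap = Math.min(expected.length, actual.length);
	int matches = 0;
	for (int i = 0; i < overlap; i++) {
		if (expected[i] == actual[i]) matches++;
	}
	// missing or extra bytes count as errors
	return (double) matches / Math.max(expected.length, actual.length);
}
\end{lstlisting}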

\section{Extensions}

I have implemented an extension to the core requirements in the project
proposal, allowing simultaneous two-way communication between the mobile
devices. This is useful not only for simultaneous transfer of files, but also as
a method of feedback sent from the receiving phone so concepts such as backoff
signals can be implemented.

To do this I created two threads in the program to run simultaneously, one for
sending data and one for receiving. Both phones now store data from the
microphone as well as the data they have sent out. When analysing the incoming
data, the phone compares what it receives to what it is sending. If the two
match exactly, the phone knows that it is the only one sending data and can
ignore the input, while still progressing through the array in preparation for
a collision. Once a collision occurs and two frequencies are detected, the
phone compares both to the array of data it is sending and stores the other in
an array of foreign frequencies. In the event that both phones send the same
frequency at the same time, it should have a significantly higher amplitude,
and the phone can opt to save it in the array regardless. Once the phone runs
out of sent data to compare against, or once it stops detecting collisions
because the other phone has stopped transmitting, the foreign frequencies
array can be output to the user.
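The comparison step can be sketched with a simplified model in which each time
slot yields one or two detected frequencies (all names are illustrative):

\begin{lstlisting}[caption={A sketch of separating foreign frequencies from the
phone's own transmission during two-way communication.},label={lst:foreign}]
import java.util.ArrayList;
import java.util.List;

public static List<Integer> extractForeign(int[] sent, int[][] heard) {
	// heard[t] holds the frequencies detected in time slot t
	List<Integer> foreign = new ArrayList<>();
	for (int t = 0; t < heard.length; t++) {
		for (int f : heard[t]) {
			if (t < sent.length && f == sent[t]) {
				continue; // our own signal; skip but keep progressing
			}
			foreign.add(f); // the other phone's data
		}
	}
	return foreign;
}
\end{lstlisting}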

\end{document}