\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{color}
\usepackage{listings}
\lstset{ %
language=Java,                % choose the language of the code
basicstyle=\footnotesize,       % the size of the fonts that are used for the code
numbers=left,                   % where to put the line-numbers
numberstyle=\footnotesize,      % the size of the fonts that are used for the line-numbers
stepnumber=1,                   % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt,                  % how far the line-numbers are from the code
backgroundcolor=\color{white},  % choose the background color. You must add \usepackage{color}
showspaces=false,               % show spaces adding particular underscores
showstringspaces=false,         % underline spaces within strings
showtabs=false,                 % show tabs within strings adding particular underscores
frame=single,           % adds a frame around the code
tabsize=2,          % sets default tabsize to 2 spaces
captionpos=b,           % sets the caption-position to bottom
breaklines=true,        % sets automatic line breaking
breakatwhitespace=false,    % sets if automatic breaks should only happen at whitespace
escapeinside={\%*}{*)}          % if you want to add a comment within your code
}
\renewcommand{\lstlistingname}{Code Sample}

\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Implementation}

In this chapter I explain the implementation of Dolphin. Dolphin is a Java
program that runs on Android, accessing a mobile device's microphone for data to
be decoded and its speakers for the output of encoded data. It also has a
socket interface to receive the data to be encoded. Figure~\ref{fig:structure}
shows how Dolphin is structured. The two main sections are encoding bits as
sounds and decoding sounds into bits. The encoding section uses data provided by
the user and accesses the speakers directly, whereas the decoding section
accesses the microphone directly and returns a sequence of bits to the user
based on what was recorded. The \texttt{Encode} class takes a byte array
representing a file and returns a byte array containing samples of a series of
sine waves. The \texttt{Decode} class takes a byte array representing recorded
microphone data, performs a conversion and returns a byte array representing
the originally encoded bytes. Both then use the \texttt{Output} class to play
the sound and save the file respectively.

\begin{figure}[t]
\includegraphics[width=\textwidth]{structure.png}
\caption{A class diagram showing the structure of Dolphin.}
\label{fig:structure}
\end{figure}

\section{Socket interface}

The socket interface is defined entirely on the device to avoid using an
Internet connection, as specified in the project proposal. To do this, Dolphin
creates a \texttt{ServerSocket}, using \emph{localhost} as the address, to
listen for connections, and returns a \texttt{Socket} to the user when a request
is made. There are very few ports that are unavailable for use in Android. HTTP
uses port 80 and HTTPS uses 443, but as in Linux, on which Android is based, the
bottom 1024 ports are available only to the root user, so these would be
unavailable anyway. I have chosen port 3574, on which the \texttt{ServerSocket}
listens for connections.

Once the sockets have been established the methods \texttt{getInputStream()} and
\texttt{getOutputStream()} provide the connectivity required to move data
through the socket. The input stream is then converted to a byte array using
the \texttt{read(byte[] b)} method, which returns the number of bytes read
into \texttt{b}, or $-1$ once the stream has ended. This loops until the stream
ends; on each iteration the contents of \texttt{b} are written to a
\texttt{ByteArrayOutputStream}, which is flushed to a byte array once the data
input stops. Dolphin then encodes this byte array, and returns a byte array
representing a sound to the output stream.
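The read loop described here can be sketched as follows (the class and method
names are illustrative, not Dolphin's actual ones); note that
\texttt{InputStream.read} returns $-1$ at the end of the stream, which
terminates the loop.

\begin{lstlisting}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class SocketReader {
    // Drains an InputStream (e.g. from Socket.getInputStream())
    // into a single byte array.
    public static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] b = new byte[4096];
        int n;
        // read returns the number of bytes placed in b, or -1 at end of stream
        while ((n = in.read(b)) != -1) {
            out.write(b, 0, n); // write only the bytes actually read
        }
        return out.toByteArray(); // flush the accumulated data to one array
    }
}
\end{lstlisting}

The same pattern applies in the other direction, writing the encoded sound
bytes back through \texttt{getOutputStream()}.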

A rooted Android device would be able to use LD\_PRELOAD to put Dolphin between
an app and the libraries it uses. This means any app which used another method
to convert bytes, and therefore used a byte stream in a function call, could use
Dolphin without changing its implementation at all. With LD\_PRELOAD the
function call would be intercepted and replaced with the functionality of
Dolphin without calling the original library; the rest of the calls to the
original library would proceed unimpeded. Similar functionality could be
achieved using \emph{iptables}: an \emph{iptables} rule would intercept traffic
originally going to another encoder and redirect it to Dolphin, which would
then return the result directly.

\section{Encoding}

As I discussed in Chapter 2, there are many different ways to encode the data
as sound waves. The coding scheme I use is \emph{Frequency Shift Keying (FSK)}.
This is because amplitude and phase techniques are best suited
to binary encoding, whereas FSK can easily accommodate $n$-bit encoding with
little impact on performance. This is because, even after allowing a suitable
tolerance such as not using any frequency within 20Hz of another in use, there
are several thousand frequencies available within the detection range of a
microphone.

I use the binary representation of data to perform encoding, without any heuristic
analysis. This is simpler than creating different cases for different types of
file, e.g. sending text one letter at a time and images one pixel at a time
would require different implementations and an analysis of what the file
contains beforehand. I considered enhancing the encoding scheme using a
technique similar to \emph{delta frames} in video encoding, in which only the
changes from one frame to the next are sent~\cite{deltaframes}. This works for
video as sequential image frames are often very similar, so few changes are
necessary and the amount of data sent is reduced. In Dolphin it would
involve sending the differences between each sequential byte rather than
the bytes themselves. However, there is no guarantee that the bit-structure
of a generic file will yield bytes similar to their predecessors so its
performance would be varied. I therefore leave this to higher layers in the
stack, which can implement techniques like this by altering the data stream sent
into the socket.
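As an illustration of what such a higher layer could do, the following is a
minimal sketch of byte-wise delta coding; the class and method names are
hypothetical and not part of Dolphin. Because Java byte arithmetic wraps modulo
256, the decoding is exact.

\begin{lstlisting}
public class DeltaCodec {
    // Replace each byte with its difference from the previous one.
    public static byte[] encode(byte[] data) {
        byte[] out = new byte[data.length];
        byte prev = 0;
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] - prev); // wraps modulo 256
            prev = data[i];
        }
        return out;
    }

    // Invert the encoding by accumulating the differences.
    public static byte[] decode(byte[] deltas) {
        byte[] out = new byte[deltas.length];
        byte prev = 0;
        for (int i = 0; i < deltas.length; i++) {
            prev = (byte) (prev + deltas[i]);
            out[i] = prev;
        }
        return out;
    }
}
\end{lstlisting}

Such a layer would delta-encode the stream before writing it to the socket and
reverse the process on the receiving side.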

\section{Libraries}

Using previously developed libraries in software is not only easier than
reimplementing features but is also good professional practice: it prevents
code duplication and reduces the risk of bugs, as established libraries will
have been extensively tested already.

\subsection{Input}

The bytes comprising the data to be sent to this method for encoding can be
accessed using a \texttt{FileInputStream} and the existing \texttt{toByteArray}
method of the \emph{Apache Commons} class \texttt{IOUtils}, which is applied
directly to the \texttt{FileInputStream}. Once the bit structure of the file is
held in a byte array, the bytes are mapped to frequencies for playback as
sound. As Java has no unsigned types, any byte in the array with a 1 in the
most significant bit will be a negative number, so a na{\"i}ve mapping that
multiplies the bytes by a suitable factor would produce negative frequencies,
which cannot be played. Therefore, before mapping bytes to frequencies, the
signed bytes need to be converted to unsigned values, using a larger type such
as a \texttt{short}.
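A sketch of this conversion (the class name is illustrative): masking with
\texttt{0xFF} widens the byte to its unsigned value, here combined with the
enlarging factor of 30 chosen later in this chapter.

\begin{lstlisting}
public class FrequencyMap {
    static final int FACTOR = 30; // frequency spacing in Hz

    // Maps a signed Java byte to a non-negative frequency.
    public static int toFrequency(byte b) {
        int unsigned = b & 0xFF;  // 0..255, even when b is negative
        return unsigned * FACTOR; // 0..7650 Hz
    }
}
\end{lstlisting}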

\subsection{Representing sound}

In Java a sine wave can be represented by a byte array, with each array element
representing a sampled value of the function. I sample by using the Java
\emph{Math} library, shown in Code Sample~\ref{lst:sineWave}, when given
integers \texttt{freq}, \texttt{sampleRate} and \texttt{time}.

\begin{lstlisting}[caption={Encoding a sine wave in a Java byte array. The sine
value is multiplied by 127 to change the range of the result from the usual
(-1,1) to (-127,127), utilising a much wider range of
possible byte values.},label={lst:sineWave}]
public byte[] bufferSound(int freq, int time, int sampleRate) {
	byte[] sineWave = new byte[sampleRate*time];
	for (int i=0; i<sineWave.length; i++) {
		double angle = (i * freq * 2.0 * Math.PI) / sampleRate;
		sineWave[i] = (byte)(Math.sin(angle) * 127.0);
	}
	return sineWave;
}
\end{lstlisting}

This method converts an integer frequency value into the corresponding sound.
Each of these byte arrays can be stored in an array of byte arrays and output
one after another by iterating over it.

\subsection{Output}

To output the sound on a computer system (this technique is not limited to
phone-to-phone communication), a \texttt{SourceDataLine} can be obtained for the
computer speakers using the library \texttt{AudioSystem}.\footnotemark~A
\texttt{SourceDataLine} converts the byte data into the actual sound for
playback. Its \texttt{write} method can then be used to play the byte array
directly, as the processing on the array up to this point leaves it in PCM
format (as used in .wav files), which is supported by the library. It is
important to call the \texttt{drain} method before closing the
\texttt{SourceDataLine}, in the same way you would call \texttt{flush} for an
output stream; without this call the sound will not play for the full duration
of the array and data will be lost.

\footnotetext{Using the method AudioSystem.getSourceDataLine(AudioFormat),
where the AudioFormat has been initialised with the required settings for data
capture, e.g. use 8 bits per sample as we are dealing with bytes.}

\subsection{Android}

Android does not support the \texttt{AudioSystem} library, but has similar
functionality in \texttt{AudioTrack}. Previously, the \texttt{AudioFormat}
input to the \texttt{SourceDataLine} was initialised with information on the
sample rate, bits per sample, etc. Correspondingly, an \texttt{AudioTrack} is
initialised with mostly the same arguments using class-specific constants, and
afterwards the code to play the sound on a mobile device is almost identical
but with different method names. Conceptually, both start the object, write to
it, flush the data and close the object again.

The way Android is written also means there are additional tasks to complete.
One such task is allowing the program to access the speaker separately in the
\emph{manifest} file, using the Android permission system. Permissions are a
security feature declared statically at compile time, designed to both alert
the user that an app has access to extra features on their device and to allow
the scheduler to more easily allocate shared resources. The relevant permission
in this case is \emph{android.permission.MODIFY\_AUDIO\_SETTINGS}.
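The declaration itself is a single line inside the app's
\texttt{AndroidManifest.xml}:

\begin{lstlisting}[language=XML]
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
\end{lstlisting}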

\section{Decoding}

Decoding the sound means getting the frequency of that sound and converting the
number into a byte, based on the original encoding mapping. The frequency is the
number of cycles the sine wave representing the sound goes through per second.
As a sine wave crosses the $x$-axis twice per cycle, one way to decode
the sound would therefore be to count the number of times the wave crosses
the axis in a certain period of time. Dividing this value by 2 will reveal the
frequency. Another way to determine the frequency is to use a Fourier Transform.
This technique utilises the periodic nature of sine waves and finds the
frequency by brute force. It does this by multiplying the incoming sine wave
with unknown frequency by another sine wave with a known frequency, and
integrating the result.
If the two frequencies match then they will rise and fall at the same rate and positive values will always
multiply with positive values and negative values will always multiply with
negative values, resulting in a wave which is entirely in the positive domain.
Integrating this wave will return a large positive number. However, if the
frequencies do not match then the waves will rise and fall at different rates
and at some point positive values will be multiplied with negative values which
will return a wave with some peaks greater than 0 and some less than 0.
When this wave is integrated the negative peaks will cancel out the positive
peaks and the result will be much less than the integration on the correct
frequency. Once the transform has calculated these integrations for a series of
frequencies in the range of possible frequencies the original unknown frequency
will be the one represented by the largest value in the integrals.
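The multiply-and-integrate step can be written compactly. For an incoming
signal $s(t)$ observed over a window of length $T$ and a probe frequency $f$,
the correlation is (a simplified, real-valued form; the full transform uses a
complex exponential):

\begin{equation}
C(f) = \int_{0}^{T} s(t)\sin(2\pi f t)\,dt
\end{equation}

$C(f)$ is large when $f$ matches the frequency of $s(t)$ and close to zero
otherwise, so scanning $f$ over the candidate range and taking the maximum
recovers the unknown frequency.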

I use a Fourier Transform to determine the frequency or frequencies present,
rather than counting the solutions to $f(x)=0$, because counting the solutions
would not allow multiple frequencies to be decoded at once (which may be
implemented to improve bandwidth), and background noise could have a very
unpredictable effect on the shape of the incoming wave. There are libraries
that implement various types of transform, so there is no need to repeat this
effort. For example, Princeton University has written a single-class version
which is used in the Data Analysis module of their Computer Science
course.\footnotemark~The main weakness of this implementation is that it
allocates a new array for every transform, so it is memory-inefficient for very
large arrays. As the sample rates I use are all in the order of thousands, each
array created also contains several thousand entries, and eventually there is
the risk of Java running low on memory. When the heap grows large the garbage
collector runs, which slows the program down as it scans every live object in
the heap looking for those no longer referenced, likely millions of objects
after so much array allocation.

\emph{Apache} have written a Fast Fourier Transformer in their \emph{Commons}
library. This performs the transform in place: instead of using the array of
values to create a new array containing the transformed result, the input array
itself is altered to represent the new information. This is much more
space-efficient, as half as many large arrays are created. As I already use the
\emph{Apache Commons} library for processing incoming files, this also means I
can limit the number of additional libraries I need to include in the Dolphin
code by using the same one.

\footnotetext{\emph{http://introcs.cs.princeton.edu/java/97data/FFT.java.html}}

\subsection{Microphone input}

To access the microphone on a mobile device using Android the
necessary permissions must be added to the manifest.\footnotemark~This is a
static check that Android performs for security purposes, informing the user
that the program is capable of accessing this non-standard piece of hardware.
The code uses the class \texttt{AudioRecord} to gather the data from the
microphone and a \texttt{ByteArrayOutputStream} to store it until the
transmission ends. When initialising the \texttt{AudioRecord} object it is not
necessary to use the same arguments as those used when initialising
the \texttt{AudioTrack} object for playback. This means a higher sample rate
or a larger type can be used to record the input for more reliable decoding,
whereas a lower sample rate may suffice for encoding depending on the quality of
the device audio drivers. For example, when experimenting with sample rates I
discovered that a PC running Dolphin in Java will encode a sound using 8 bits
per sample and play it perfectly, but to get the same results on a mobile device
a 16-bit short is required, else the distortion in the sound makes it unusable.
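Moving to 16-bit samples means each sampled value occupies two bytes. A sketch
of the packing, assuming little-endian PCM byte order (the class name is
illustrative):

\begin{lstlisting}
public class SampleConverter {
    // Packs 16-bit samples into a little-endian PCM byte array.
    public static byte[] toBytes(short[] samples) {
        byte[] out = new byte[samples.length * 2];
        for (int i = 0; i < samples.length; i++) {
            out[2 * i] = (byte) (samples[i] & 0xFF);            // low byte
            out[2 * i + 1] = (byte) ((samples[i] >> 8) & 0xFF); // high byte
        }
        return out;
    }
}
\end{lstlisting}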

\texttt{AudioRecord.read} returns an integer stating how many
bytes were read into an array which serves as a buffer. This method can
therefore be used inside a while loop to continue reading data from the microphone until the total number of bytes read
reaches a certain value, set by the user. For example, this will be useful in
audio QR codes as the maximum size could be determined in advance. Alternatively
the read number can go unused and the while-loop condition could be a flag that
is changed when the decoding portion of the code detects that the transmission
has stopped. As with the \texttt{AudioTrack} object it is important to call stop
and release once the microphone is no longer needed or the data will not
completely flush into the buffer and no other applications will be able to use
the microphone until Dolphin closes. Dolphin itself would also be unable to use
the microphone until the lock had been released as the previous process would
still be holding the lock. The \texttt{ByteArrayOutputStream} holding the
recorded data can then be converted to a byte array and returned for analysis in
a Fourier Transform.

\footnotetext{\textless uses-permission
android:name="android.permission.RECORD\_AUDIO" /\textgreater}

\subsection{Apache Fast Fourier Transform}

Using the existing \emph{Apache} library\footnotemark~is more reliable than
implementing one of my own because it has been tested by a large number of
people, instead of just being tested by me. It also comes with various options
for the different transforms: for example, there are separate methods already
written either to transform the data and return a new array, or to perform the
transformation in place to save memory, which is useful considering the current
implementation uses 16000 bytes per byte of data encoded.\footnotemark~It also
distinguishes between a forward and an inverse transform, which is not
important for my implementation but could be useful in implementing other
functionality. Its API is already written, which will help future developers
alter this project if necessary.

\footnotetext{\emph{http://commons.apache.org/}. The FFT is stored
under \emph{org.apache.commons.math3.transform}}
\footnotetext{For test purposes. Experiments to determine optimum sample rates
can be found in Chapter 4.}

The FFT returns an array of type \texttt{Complex}, defined in the \emph{Apache
Commons} library. It is a datatype that stores two values, the real and
imaginary parts of the number, and the class has methods to retrieve these
parts separately. Each 64ms of the sound is sent to the FFT individually to
determine which frequencies are present in that 64ms burst. Once the complex
array is returned, a new array of type \texttt{double} is defined in which each
original element $x+iy$ is replaced by its squared magnitude, which is
proportional to the power at that frequency and suffices for locating the peak:

\begin{equation}
\lvert x+iy\rvert^{2} = x^{2}+y^{2}
\end{equation}

Then, by iterating over the array, the frequency is determined by multiplying
the array index containing the largest amplitude by the sample rate and
dividing by twice the total size of the array. As demonstrated in
Figure~\ref{fig:mirrored}, I divide by twice the array size because the FFT
returns both positive and negative frequencies in the range, mirrored about 0,
so $-300$Hz has the same amplitude as 300Hz. Rounding to the nearest expected
frequency (a multiple of 30) will return the intended frequency.
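The peak search can be sketched as follows; \texttt{toFrequency} implements the
formula described above, including the rounding to the nearest multiple of
30Hz. The class and method names are illustrative.

\begin{lstlisting}
public class PeakFinder {
    // Index of the largest amplitude in the array.
    public static int peakIndex(double[] amplitudes) {
        int best = 0;
        for (int i = 1; i < amplitudes.length; i++) {
            if (amplitudes[i] > amplitudes[best]) best = i;
        }
        return best;
    }

    // Bin index to frequency: multiply by the sample rate, divide by
    // twice the array size, then round to the nearest multiple of 30.
    public static int toFrequency(int index, int sampleRate, int arraySize) {
        double freq = (double) index * sampleRate / (2.0 * arraySize);
        return (int) (Math.round(freq / 30.0) * 30);
    }
}
\end{lstlisting}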

\begin{figure}[h]
\includegraphics[width=\textwidth]{mirrored.png}
\caption{The Fourier Transform returns both positive and negative frequencies,
with values mirrored about 0.}
\label{fig:mirrored}
\end{figure}

\section{Testing}

A certain amount of testing needed to be done during implementation to find
optimum sample rates, encode lengths and frequency spacing. Here I outline
this testing; Chapter 4 presents more detailed results.

\subsection{Testing Procedure}

Figure~\ref{fig:tests} shows a photograph of the testing environment. The
sender and receiver were set a constant distance apart so that the effect of
altering a variable was not confounded by other factors, such as a greater
amplitude caused by a closer audio source.

\begin{figure}[h]
\includegraphics[width=\textwidth]{testing.png}
\caption{A photo of the testing setup. The phones are always a constant
distance apart to minimize extraneous factors in the initial testing.}
\label{fig:tests}
\end{figure}

\subsection{Android application}

To test my module I created an Android application to simply send and
receive text. Figure~\ref{fig:app} shows a
screenshot of this app and Figure~\ref{fig:blackbox} shows how my app interacts
with my project, taking the place of the top ``end user'' section. In Chapter 4
I present the results for each of the variables I test with various
values and discuss the trade-offs. To summarise:
\begin{itemize}
  \item{{\bf Enlarging Factor:} The accuracy of the Fourier Transform
  was not exact. Therefore, the frequencies used are spaced out
  and then the measured frequency is rounded to the nearest expected
  frequency. Multiplying each byte value by 30 is sufficient to correct errors
  at a reasonable sound amplitude. Frequencies for 8-bit sequences therefore
  range from 0\textendash7650Hz.}
  \item{{\bf Sample Rate:} given the enlarging factor a minimum sample rate
  of 15300Hz is required.\footnotemark~The Apache Fast Fourier
  Transform class requires that the array length be a power of 2, so the sample
  rate also needs to be a power of 2. The record methods also measure time in
  seconds rather than milliseconds, so the sample rate must be a power of 2
  multiplied by 1000 to counteract the division elsewhere. The choices are
  therefore 16000 and 32000; I have chosen 32000 as it far outperformed 16000
  in accuracy tests.}
  \item{{\bf Tone length:} for the FFT, tone lengths in milliseconds must also
  be a power of 2; I tested 16, 32, 64 and 128, and selected 64ms as it
  performed more reliably than 32ms in testing.}
\end{itemize}
\footnotetext{Given a maximum frequency of 7650Hz and the Nyquist-Shannon
sampling theorem.}

\begin{figure}[t]
\includegraphics[width=\textwidth]{app_screenshot.png}
\caption{A screenshot of the test app designed to use the Dolphin system, with
access to the phone microphone and speakers. This is the view after the phone
has completed the decoding of a file from the audio input.}
\label{fig:app}
\end{figure}

\begin{figure}[t]
\includegraphics[width=\textwidth]{blackbox.png}
\caption{Other apps should be able to use my project like a black box and be
able to integrate this into other projects or applications like an extra
module.}
\label{fig:blackbox}
\end{figure}

\subsection{Graphing tools}

To analyse the data received I used a combination of a Java graphing class and
Matlab. The Graph class uses Java's existing \emph{Swing} \texttt{JPanel} to
display the graph on screen, overriding
\texttt{JComponent.paintComponent(Graphics g)}. It draws a series of lines
using the method \texttt{g2.draw(new Line2D.Double(x1, y1, x2, y2))}, the
endpoint \texttt{y2} of each being the next value in the data given (in this
case the frequencies recorded).
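A minimal sketch of such a panel (names illustrative); each data value becomes
the \texttt{y2} endpoint of the next segment.

\begin{lstlisting}
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.Line2D;
import javax.swing.JPanel;

public class Graph extends JPanel {
    private final double[] data; // e.g. the recorded frequencies

    public Graph(double[] data) {
        this.data = data;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        // One segment per data point; y2 of each line is the next value.
        for (int i = 1; i < data.length; i++) {
            g2.draw(new Line2D.Double(i - 1, data[i - 1], i, data[i]));
        }
    }
}
\end{lstlisting}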

The data given to Matlab for graphing was obtained by storing a record of 
\begin{enumerate} 
  \item the original file to be encoded;
  \item its encoded frequency values;
  \item the microphone input after transmission;
  \item and the file written based on the decoded information.
\end{enumerate}
All of these take the form of byte arrays for numerical comparison. Comparing
(1) with (4) and (2) with (3) will give a measure of how accurate each
transmission is.

\section{Extensions}

I have implemented an extension to the core requirements in the project
proposal, allowing simultaneous two-way communication between the mobile
devices. This is useful not only for simultaneous transfer of files, but also as
a method of feedback sent from the receiving phone so concepts such as backoff
signals can be implemented.

To do this I created two threads in the program to run simultaneously, one for
sending data and one for receiving. Both phones now store data from the
microphone as well as the data they have sent out. When analysing the incoming
data the phone will compare what it receives to what it is sending. If they
exactly match then the phone knows that it is the only one sending data and can
ignore it, but still progress through the array in preparation for a collision.
Once a collision occurs and it detects two frequencies it can compare them both
to the array of data it is sending and store the other in an array for foreign
frequencies. In the event both send the same frequency at the same time it
should have a significantly higher amplitude, but can opt to save it in the
array regardless. Once the phone runs out of data it is sending to compare to,
or if it stops detecting collisions as the other phone has stopped transmitting,
the foreign frequencies array can be output to the user.
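The comparison step can be sketched as a pure method (names illustrative): for
each time slot, any detected frequency that does not match what this phone sent
in that slot is stored as foreign.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

public class CollisionFilter {
    // sent[i]: the frequency this phone emitted in slot i.
    // received[i]: frequencies detected in slot i (two during a collision).
    public static List<Integer> foreignFrequencies(int[] sent,
                                                   int[][] received) {
        List<Integer> foreign = new ArrayList<>();
        for (int i = 0; i < received.length; i++) {
            for (int f : received[i]) {
                // Past the end of our own data, everything is foreign.
                if (i >= sent.length || f != sent[i]) {
                    foreign.add(f);
                }
            }
        }
        return foreign;
    }
}
\end{lstlisting}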

\section{Summary}

All the required features of Dolphin as described in the proposal, namely the
encoding, decoding and socket interface, have been implemented, as well as one
extension beyond the original success criteria. The next stage is evaluating
the effectiveness of the implementation, which is described in the next
chapter.

\end{document}